Import Files in Protocol Buffers
A Java project contains protocol buffer files (defining a gRPC API). It uses protobuf-gradle-plugin to compile the protocol buffer files. By default, this Gradle plugin reads protocol buffer source files from the src/main/proto/ directory. The structure of src/main/proto/ is like below.
.
├── foo
│   ├── bar
│   │   └── def.proto
│   ├── common
│   │   └── common.proto
Assume the foo/bar/def.proto file needs to import message types defined in the foo/common/common.proto file. To do that, add lines like below to the foo/bar/def.proto file.
syntax = "proto3";

package foo.bar;

import "foo/common/common.proto";

message BarType {
  // use the fully qualified name to refer to CommonTypeA, not just `CommonTypeA`
  // (assuming the package name is "foo.common")
  foo.common.CommonTypeA a = 1;
}
An import statement using a relative path, like import "../../common.proto";, does not work, because the protocol compiler does not search files in the "upper" directories. It only searches the directories provided by the -I/--proto_path options, or the directory in which the compiler was invoked if the -I option is not given.
The protocol compiler searches for imported files in a set of directories specified on the protocol compiler command line using the -I/--proto_path flag. If no flag was given, it looks in the directory in which the compiler was invoked. In general you should set the --proto_path flag to the root of your project and use fully qualified names for all imports.
The protobuf-gradle-plugin passes src/main/proto/ to the -I option when invoking protoc. This can be observed from the Gradle info log, ./gradlew build -i. Therefore, for the statement import "foo/common/common.proto";, protoc will successfully find the imported file under the foo/common directory of src/main/proto/.
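For reference, the effective protoc invocation resembles the command below. This is a simplified sketch; the plugin's real invocation passes more options, such as the output directories for the code generators.
$ protoc -I=src/main/proto \
    --java_out=build/generated/source/proto/main/java \
    src/main/proto/foo/bar/def.proto \
    src/main/proto/foo/common/common.proto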
(Usually IntelliJ IDEA knows that src/main/proto/ is the path to search for imported files. If not, add it to "Custom include paths" in Preferences -> Protobuf Support.)
More documentation about the --proto_path=IMPORT_PATH option:
IMPORT_PATH specifies a directory in which to look for .proto files when resolving import directives. If omitted, the current directory is used. Multiple import directories can be specified by passing the --proto_path option multiple times; they will be searched in order. -I=IMPORT_PATH can be used as a short form of --proto_path.
The jq Command Examples
- The keys builtin function
- Array/Object Value Iterator: .[]
- exp as $x | ... and String Interpolation
- More Complex Expression in String Interpolation
- Array construction: []
- Object Construction: {}
- Object Construction: {} and Array construction: []
- The sort function
- The sort_by function
- Select/Filter
- Multiple Conditions in select
Some jq examples. All quotes are from the jq manual.
A sample JSON file is as below.
$ cat sample.json
{
  "apple-weight": [
    60
  ],
  "orange-weight": [
    50
  ],
  "banana-weight": [
    20,
    35
  ]
}
The keys builtin function
$ jq '. | keys' sample.json
[
  "apple-weight",
  "banana-weight",
  "orange-weight"
]
The builtin function keys, when given an object, returns its keys in an array.
Array/Object Value Iterator: .[]
$ jq '. | keys[]' sample.json
"apple-weight"
"banana-weight"
"orange-weight"
If you use the .[index] syntax, but omit the index entirely, it will return all of the elements of an array.
Running .[] with the input [1,2,3] will produce the numbers as three separate results, rather than as a single array.
You can also use this on an object, and it will return all the values of the object.
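For instance, applying the iterator directly to the sample object produces its values (an extra sketch, not among the original examples):
$ jq -c '.[]' sample.json
[60]
[50]
[20,35]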
exp as $x | ... and String Interpolation
$ jq '. | keys[] as $k | "\($k), \(.[$k])"' sample.json
"apple-weight, [60]"
"banana-weight, [20,35]"
"orange-weight, [50]"
The expression exp as $x | ... means: for each value of expression exp, run the rest of the pipeline with the entire original input, and with $x set to that value. Thus as functions as something of a foreach loop.
The '. | keys[] as $k | "\($k), \(.[$k])"' means: for each value of . | keys[], which are "apple-weight", "banana-weight" and "orange-weight", run the rest of the pipeline, i.e. "\($k), \(.[$k])", which is string interpolation.
String interpolation - \(foo)
More Complex Expression in String Interpolation
$ jq '. | keys[] as $k | "\($k), \(.[$k][0])" ' sample.json
"apple-weight, 60"
"banana-weight, 20"
"orange-weight, 50"
The \(.[$k][0]) is replaced with the value of .[$k][0] for each key, e.g. .["apple-weight"][0] for the first key.
Array construction: []
$ jq -c '. | keys[] as $k | [$k, .[$k][0]] ' sample.json
["apple-weight",60]
["banana-weight",20]
["orange-weight",50]
$ jq '[ . | keys[] as $k | [$k, .[$k][0]] ] ' sample.json
[
  [
    "apple-weight",
    60
  ],
  [
    "banana-weight",
    20
  ],
  [
    "orange-weight",
    50
  ]
]
If you have a filter X that produces four results, then the expression [X] will produce a single result, an array of four elements.
The . | keys[] as $k | [$k, .[$k][0]] produces three results; enclosing it in [] produces an array of these three elements.
Object Construction: {}
$ jq ' . | keys[] as $k | {category: $k, weight: .[$k][0]} ' sample.json
{
  "category": "apple-weight",
  "weight": 60
}
{
  "category": "banana-weight",
  "weight": 20
}
{
  "category": "orange-weight",
  "weight": 50
}
Object Construction: {} and Array construction: []
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] ' sample.json
[
  {
    "category": "apple-weight",
    "weight": 60
  },
  {
    "category": "banana-weight",
    "weight": 20
  },
  {
    "category": "orange-weight",
    "weight": 50
  }
]
The sort function
$ jq '[ . | keys[] as $k | [$k, .[$k][0]] ] | sort ' sample.json
[
  [
    "apple-weight",
    60
  ],
  [
    "banana-weight",
    20
  ],
  [
    "orange-weight",
    50
  ]
]
The sort functions sorts its input, which must be an array.
Values are sorted in the following order: null, false, true, …
The [ . | keys[] as $k | [$k, .[$k][0]] ] is an array of three elements, each of which is itself an array. These three elements, according to the manual, are sorted "in lexical order".
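The documented ordering across types can be seen with a quick sketch (not from the original post):
$ echo '[1, "a", null, true, false]' | jq -c 'sort'
[null,false,true,1,"a"]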
The sort_by function
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) ' sample.json
[
  {
    "category": "banana-weight",
    "weight": 20
  },
  {
    "category": "orange-weight",
    "weight": 50
  },
  {
    "category": "apple-weight",
    "weight": 60
  }
]
sort_by(foo) compares two elements by comparing the result of foo on each element.
The [ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] is an array of three objects. The | sort_by(.weight) sorts these three objects by comparing their weight property. The final result is still an array, but sorted.
Select/Filter
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | .[] | select(.weight >= 50) ' sample.json
{
  "category": "orange-weight",
  "weight": 50
}
{
  "category": "apple-weight",
  "weight": 60
}
The function select(foo) produces its input unchanged if foo returns true for that input, and produces no output otherwise.
The [ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) produces a sorted array. The following .[], i.e. the array iterator, feeds select(.weight >= 50) with the three elements of that array. The final result is the elements whose weight is equal to or larger than 50.
The command below, using map, produces the same result.
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | map(select(.weight >= 50)) ' sample.json
[
  {
    "category": "orange-weight",
    "weight": 50
  },
  {
    "category": "apple-weight",
    "weight": 60
  }
]
Multiple Conditions in select
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | .[] | select( (.weight >= 50) and (.weight < 60)) ' sample.json
{
  "category": "orange-weight",
  "weight": 50
}
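Conditions can also be combined with or. The sketch below follows the same pattern as the examples above (not from the original post):
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | .[] | select( (.weight < 30) or (.weight >= 60) ) ' sample.json
{
  "category": "apple-weight",
  "weight": 60
}
{
  "category": "banana-weight",
  "weight": 20
}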
Avoid Wrong Tracking When Creating Branches in Git
I just made a mistake and pushed commits to a wrong remote branch. Below are the details.
1. Need to create a new branch br-x, which should be based on the newest remote dev branch.
2. Run git fetch to get the newest changes from the remote.
3. Run git checkout -b br-x origin/dev to create branch br-x.
4. Change and commit files in branch br-x.
5. Run git push origin -u br-x to push commits to the remote.
In step 3, origin/dev is used as the "start-point" of the new br-x branch. As per git branch --help,
When a local branch is started off a remote-tracking branch, Git sets up the branch (specifically the branch.<name>.remote and branch.<name>.merge configuration entries) so that git pull will appropriately merge from the remote-tracking branch. This behavior may be changed via the global branch.autoSetupMerge configuration flag.
In other words, git checkout -b br-x origin/dev not only creates a new br-x branch, but also makes br-x track the remote dev branch. As a result, in step 5, git push origin -u br-x doesn't push commits into a same-name remote branch. Instead, it pushes commits into the remote dev branch, which the local br-x has been tracking since its creation.
The remote dev branch is accidentally modified. 😞
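Before pushing, the upstream of each local branch can be checked with git branch -vv. The output below is illustrative; hashes and commit messages are made up.
$ git branch -vv
* br-x 1a2b3c4 [origin/dev] some commit message
  dev  5d6e7f8 [origin/dev] another commit message
Had I run this in step 4, the [origin/dev] next to br-x would have exposed the wrong tracking.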
To avoid it, one method is to use the local dev branch as the "start-point" in step 3. Considering the local dev may be behind the remote dev, you may have to switch to the local dev and git pull to update it first.
Another method is to use the --no-track option, i.e. git checkout --no-track -b br-x origin/dev.
A more thorough method is to use git config --global branch.autoSetupMerge false to change the default behavior of Git. When branch.autoSetupMerge is false, Git will not set up a tracking branch when creating a branch, even if the "start-point" is a remote-tracking branch.
For more details, search "branch.autoSetupMerge" in git config --help.
For what a "remote-tracking" branch is, check this link. Simply put,
Remote-tracking branch names take the form <remote>/<branch>.
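As an aside, the remote-tracking branches themselves can be listed with git branch -r (the branch names below are examples):
$ git branch -r
  origin/dev
  origin/main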
Enable JUnit 5 Test Cases Based on Java System Properties
JUnit 5 supports disabling test cases via the @Disabled annotation. Sometimes, you may want some cases to be conditionally disabled or enabled. JUnit 5 provides many annotations to support conditional test execution, like @EnabledOnOs(MAC), @DisabledOnJre(JAVA_8) and @EnabledIfEnvironmentVariable.
Let's say a test case should only run when a Java system property is set to a certain value. To do it, add @EnabledIfSystemProperty onto the test case.
@Test
@EnabledIfSystemProperty(named = "foo.enabled", matches = "on")
void fooTest() {
}
The fooTest runs only when the foo.enabled system property is set to on.
To run this test case via Gradle, type the command below.
./gradlew test --tests '*.fooTest' -i -Dfoo.enabled=on
To make the JVM running the test case aware of the Java system property passed to the JVM running Gradle, add the lines below to the build.gradle file.
test {
  systemProperty "foo.enabled", System.getProperty("foo.enabled")
}
How to Add Disqus to a Rails Application
Copy the "universal code" of your Disqus account from your account's admin page, https://<your-account>.disqus.com/admin/install/platforms/universalcode/.
The universal code is like below.
<div id="disqus_thread"></div>
<script>
/**
* RECOMMENDED CONFIGURATION VARIABLES: EDIT AND UNCOMMENT THE SECTION BELOW TO INSERT DYNAMIC VALUES FROM YOUR PLATFORM OR CMS.
* LEARN WHY DEFINING THESE VARIABLES IS IMPORTANT: https://disqus.com/admin/universalcode/#configuration-variables*/
/*
var disqus_config = function () {
    this.page.url = PAGE_URL;  // Replace PAGE_URL with your page's canonical URL variable
    this.page.identifier = PAGE_IDENTIFIER; // Replace PAGE_IDENTIFIER with your page's unique identifier variable
};
*/
(function() { // DON'T EDIT BELOW THIS LINE
    var d = document, s = d.createElement('script');
    s.src = 'https://<your-account>.disqus.com/embed.js';
    s.setAttribute('data-timestamp', +new Date());
    (d.head || d.body).appendChild(s);
})();
</script>
<noscript>Please enable JavaScript to view the <a href="https://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
Paste it into where the Disqus comments block should be displayed. Set this.page.url and this.page.identifier, and uncomment these lines to "avoid split threads and missing comments". For example, for this site they are set like below.
// tested in Rails 5
this.page.url = '<%= url_for host: "https://onefeed.xyz" %>';
this.page.identifier = '<%= @post.slug %>';
Update
If the Rails application is deployed behind an Nginx where SSL/HTTPS is enabled, listening on an internal port like 3000, the <%= url_for host: "https://onefeed.xyz" %> above will generate values like http://onefeed.xyz:3000/posts/how-to-add-disqus-to-a-rails-application.
To correct the port and the scheme, use the code below.
this.page.url = '<%= url_for host: "onefeed.xyz", port: nil, protocol: "https" %>';
It generates URLs like https://onefeed.xyz/posts/how-to-add-disqus-to-a-rails-application.
Check more in the url_for API documentation.
Different Email Addresses for Different Git Repositories
Sometimes, we may want to set up different user emails and user names for different Git repositories. For example, on your personal computer, the user.email is set globally to your personal email address. While committing to your corporate repositories on the personal computer, your corporate email address should be used in the commits. Or, working on personal projects on the corporate computer, you need to use the personal email for the personal repositories.
Configure the Email Address for a Repository
The simplest way is to go to each repository and configure the user email for it specifically.
$ cd /path/to/repo-foo
$ git config user.email email.foo@example.com
$ git config user.name name-foo
The above git config commands write the user.email setting into the .git/config file under the repository.
From git help config, when writing configuration, by default it writes to the repository-local configuration file:
When writing, the new value is written to the repository local configuration file by default, and options --system, --global, --worktree, --file can be used to tell the command to write to that location
Examine the Email Address for a Repository
$ cd /path/to/repo-foo
$ git config --list | grep user.email
user.email=email.bar@example.com
user.email=email.foo@example.com
The git config --list above prints more than one user.email value. It's because, without additional options, git config --list outputs configuration "merged" from the system, global and local scopes.
From git help config,
When reading, the values are read from the system, global and repository local configuration files by default
The git config --list is a read operation. The first user.email value above is from "global", i.e. ~/.gitconfig.
Run git config --list --local | grep user.email to check the repository-local email configuration.
Instead of piping config --list into grep, use git config --get user.email to save some typing.
$ cd /path/to/repo-foo
$ git config --get user.email
email.foo@example.com
From git help config, --get returns the last value if multiple key values were found. Here, the last value is the email from the repository-local configuration.
The --get can be further omitted; git config user.email has the same result. And git config --get-all user.email is similar to git config --list | grep user.email, except it prints only the values, without the user.email= prefix.
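To also see which configuration file each value comes from, add the --show-origin option. The output below is illustrative; paths will differ.
$ git config --show-origin --get-all user.email
file:/home/me/.gitconfig    email.bar@example.com
file:.git/config            email.foo@example.com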
Conditional Includes
For newly cloned repositories, it's easy to forget to configure the right email addresses for them. The "conditional includes" feature of git config can save us from this problem.
For example, on your personal computer, all corporate repositories are under ~/corp-repo/. Add a text file called corp-gitconfig there, and edit it as below.
[user]
    name = user-name-for-corp-prj
    email = email-add@your-corp.com
Add the lines below in the global git config file, i.e. ~/.gitconfig.
[includeIf "gitdir:~/corp-repo/"]
    path = ~/corp-repo/corp-gitconfig
Now if a new repository is cloned under ~/corp-repo/, the email for that repository is automatically set to email-add@your-corp.com.
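To verify, clone a repository under ~/corp-repo/ and read the effective email back (a sketch; the repository URL is made up).
$ cd ~/corp-repo
$ git clone https://git.your-corp.com/some-prj.git
$ cd some-prj
$ git config user.email
email-add@your-corp.com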
Sort .csv Files by Columns in Command Line
The sort command can be used to sort .csv files by specific columns. Take an example .csv file like below.
$ cat orders.csv
user,date,product,amount,unit price
user-2,2020-05-11,product-2,2,500
user-3,2020-04-11,product-1,2,600
user-1,2020-06-11,product-3,2,100
user-1,2020-06-21,product-1,6,600
user-1,2020-04-12,product-3,2,100
To sort orders by highest unit price, run the command below.
$ sort -r --field-separator=',' --key=5 -n orders.csv
user-3,2020-04-11,product-1,2,600
user-1,2020-06-21,product-1,6,600
user-2,2020-05-11,product-2,2,500
user-1,2020-06-11,product-3,2,100
user-1,2020-04-12,product-3,2,100
user,date,product,amount,unit price
The --field-separator option (or -t) specifies , as the field separator character. By default, sort considers blank space as the field separator character.
The --key=5 lets sort use the fifth field of lines to sort the lines.
The -n is to sort numerically, and -r is to sort in reverse order.
To keep the header of the .csv file at the very first row after sorting, process substitution can be used.
$ cat <(head -1 orders.csv) \
<(tail -n +2 orders.csv|sort -r --field-separator=',' --key=5 -n)
user,date,product,amount,unit price
user-3,2020-04-11,product-1,2,600
user-1,2020-06-21,product-1,6,600
user-2,2020-05-11,product-2,2,500
user-1,2020-06-11,product-3,2,100
user-1,2020-04-12,product-3,2,100
To sort orders by highest unit price and then by amount, provide multiple --key options as below.
$ cat <(head -1 orders.csv) \
<(tail -n +2 orders.csv|sort -r -t ',' -k 5 -k 4 -n)
user,date,product,amount,unit price
user-1,2020-06-21,product-1,6,600
user-3,2020-04-11,product-1,2,600
user-2,2020-05-11,product-2,2,500
user-1,2020-06-11,product-3,2,100
user-1,2020-04-12,product-3,2,100
The format of the value of --key can be a bit more complex. For example, to sort orders by the day of the order date, run the command below.
$ sort -t , -n -k 2.9 orders.csv
user,date,product,amount,unit price
user-1,2020-06-11,product-3,2,100
user-2,2020-05-11,product-2,2,500
user-3,2020-04-11,product-1,2,600
user-1,2020-04-12,product-3,2,100
user-1,2020-06-21,product-1,6,600
The -k 2.9 means that for each line sort uses the string starting from the ninth character of the second field to the end of the line.
The -k 2.9,5 means that for each line sort only looks at the string starting from the ninth character of the second field and ending at the last character of the fifth field.
The -k 2.9,5.2 means sort only looks at the string starting from the ninth character of the second field and ending at the second character of the fifth field.
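For instance, to compare only the day of month, i.e. the ninth through tenth characters of the date field, bound the key on both sides (a sketch; rows with equal days then fall back to sort's last-resort whole-line comparison).
$ sort -t , -n -k 2.9,2.10 orders.csv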
For more details, check man sort.
Find a Tab in Hundreds of Tabs of Dozens of Safari Windows
When using Safari on my Mac, I often keep hundreds of tabs open in dozens of Safari windows. For a tab I'm recently working on, I may remember which Safari window it's in, and switch to that window by looking at the snapshots of all windows brought up by Ctrl + Down arrow.

To locate a tab which I've totally forgotten the location of, it can easily be found via menu bar -> Help -> Search. Type the keywords of that tab in the input box.

If you know which Safari window a tab is in, click the “double square” icon in the upper-right corner of the window (or View -> Show Tab Overview).

Then type some keywords of the tab to narrow down the matched results.

Jackson Mix-in Annotations
Jackson has a feature called Mix-in annotations. With this feature, we can write cleaner code for the domain classes. Imagine we have domain classes like below.
// package com.example
public interface Item {
  ItemType getType();
}

@Value
public class FooItem implements Item {
  @NonNull String fooId;

  @Override
  public ItemType getType() {
    return ItemType.FOO;
  }
}
When implementing these domain classes, no Jackson annotations are put on them. To support serialization and deserialization with Jackson for these classes, add "Mix-in" classes, for example in a separate package called com.example.jackson.
Add the ItemMixin below to let Jackson serialize and deserialize Item and its subclasses.
@JsonTypeInfo(
    use = JsonTypeInfo.Id.NAME,
    include = JsonTypeInfo.As.PROPERTY,
    property = "type")
@JsonSubTypes({
    @JsonSubTypes.Type(value = FooItem.class, name = "FOO")
    // ...
})
public abstract class ItemMixin {
}
// merge the Jackson annotations in the Mix-in class into the Item class,
// as if these annotations were in the Item class itself
objectMapper.addMixIn(Item.class, ItemMixin.class);

String json = "[{\"fooId\": \"1\", \"type\": \"FOO\"}]";
List<Item> items = objectMapper.readValue(json, new TypeReference<List<Item>>() {});
Note that FooItem is implemented as an immutable class using @Value from Lombok. With @Value annotated, FooItem has no default constructor, which makes Jackson unable to deserialize it by default. Add the FooItemMixin below to fix it.
@JsonIgnoreProperties(value = {"type"}, allowGetters = true)
abstract class FooItemMixin {
  @JsonCreator
  public FooItemMixin(@JsonProperty("fooId") String fooId) {
  }
}
With the help of Mix-in annotations, the domain classes don't have to be compromised for Jackson support. All Jackson-relevant annotations are in separate Mix-in classes in a separate package. Further, we could provide a simple Jackson module like below.
@Slf4j
public class MixinModule extends SimpleModule {

  @Override
  public void setupModule(SetupContext context) {
    context.setMixInAnnotations(Item.class, ItemMixin.class);
    context.setMixInAnnotations(FooItem.class, FooItemMixin.class);
    log.info("module set up");
  }
}
The consumers of the domain classes can simply register this module to take in all the Mix-in annotations.
objectMapper.registerModule(new MixinModule());
Jackson Mix-ins help especially when the domain classes are from a third-party library. In this case, the source of the domain classes cannot be modified, and using Mix-ins is more elegant than writing custom serializers and deserializers.
Which Accessor Style? "Fluent" vs. "Java Bean"
Basically, there are two styles of accessor naming convention in Java.
One is the traditional “Java Bean” style.
public interface Item {
  ItemType getType();
}
Another is called the "fluent" style.
public interface Item {
  ItemType type();
}
The fluent style saves some typing when writing code, and also makes the code a bit less verbose, e.g. item.type().
For example, the Lombok library supports this fluent style.
lombok.accessors.fluent = [true | false] (default: false)
If set to true, generated getters and setters will not be prefixed with the bean-standard 'get', 'is' or 'set'; instead, the methods will use the same name as the field (minus prefixes).
Which style is better? Actually, the verbose Java Bean style is the better one. It's because a lot of third-party libraries acknowledge the Java Bean style.
For example, if Item is going to be serialized by the Jackson library as a JSON string, the fluent style wouldn't work out of the box.
Also, most DTO mapping libraries are using the Java Bean style too.
Therefore, using the standard Java Bean accessor style saves effort when integrating our classes with other libraries and frameworks.
With the Lombok library and the auto-completion in modern IDEs, the Java Bean style doesn’t necessarily mean more typing.