Different Email Addresses for Different Git Repositories

2020 Jun 24

Sometimes we may want to set up different user emails and user names for different Git repositories. For example, on your personal computer, user.email is set globally to your personal email address, but commits to corporate repositories on that machine should use your corporate email address. Or you’re working on personal projects on a corporate computer and need to use your personal email for those personal repositories.

Configure Email Address for a Repository

The simplest way is to go into each repository and configure the user email for it specifically.

$ cd /path/to/repo-foo
$ git config user.email email.foo@example.com
$ git config user.name name-foo

The above git config commands write the user.email and user.name settings into the .git/config file under the repository.

From git help config: when writing configuration, by default the command writes to the repository-local configuration file.

When writing, the new value is written to the repository local configuration file by default, and options --system, --global, --worktree, --file can be used to tell the command to write to that location

Examine the Email Address for a Repository

$ cd /path/to/repo-foo
$ git config --list | grep user.email

The git config --list above may print more than one user.email value. It’s because, without additional options, git config --list outputs configuration “merged” from the system, global and local files.

From git help config,

When reading, the values are read from the system, global and repository local configuration files by default

The git config --list is a read operation. The first user.email value above is from “global”, i.e. ~/.gitconfig. Run git config --list --local | grep user.email to check the repository-local email configuration.

Instead of piping config --list into grep, use git config --get user.email to save some typing.

$ cd /path/to/repo-foo
$ git config --get user.email

From git help config, --get returns

the last value if multiple key values were found.

Here, the last value is the email from the repository-local configuration. The --get can even be omitted; git config user.email gives the same result. And git config --get-all user.email is equivalent to git config --list | grep user.email.

Conditional Includes

For newly cloned repositories, it’s easy to forget to configure the right email address. The “conditional includes” feature of git config can save us from this problem.

For example, on your personal computer, all corporate repositories are under ~/corp-repo/. Add a text file called corp-gitconfig there, and edit it as below.

[user]
        name = user-name-for-corp-prj
        email = email-add@your-corp.com

Add below lines in the global git config file, i.e. ~/.gitconfig.

[includeIf "gitdir:~/corp-repo/"]
        path = ~/corp-repo/corp-gitconfig

Now if a new repository is cloned under ~/corp-repo/, the email for that repository is automatically set to email-add@your-corp.com.
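
A quick way to verify that the conditional include took effect; here the global user.email is assumed to be personal@example.com, a hypothetical value:

```shell
$ cd ~/corp-repo/some-cloned-repo      # a repository under ~/corp-repo/
$ git config user.email
email-add@your-corp.com
$ cd ~/personal-project                # a repository outside ~/corp-repo/
$ git config user.email
personal@example.com
```

Note that a gitdir condition only matches while Git runs inside an actual repository under that path; in a plain (non-repository) directory the global value is still reported.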


Sort .csv Files by Columns in Command Line

2020 Jun 2

The sort command can be used to sort .csv files by specific columns.

Take an example .csv file like below.

$ cat orders.csv
user,date,product,amount,unit price

To sort orders by highest unit price, run the command below.

$ sort -r --field-separator=',' --key=5 -n orders.csv
user,date,product,amount,unit price

The --field-separator option (or -t) specifies , as the field separator character; by default, sort treats blank space as the field separator. The --key=5 option tells sort to sort lines by their fifth field. The -n sorts numerically, and -r sorts in reverse order.

To keep the header row of the .csv file at the very top after sorting, process substitution can be used.

$ cat <(head -1 orders.csv) \
      <(tail -n +2 orders.csv|sort -r --field-separator=',' --key=5 -n)
user,date,product,amount,unit price
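
As a self-contained illustration, here is the same command over a few made-up rows:

```shell
$ cat orders.csv
user,date,product,amount,unit price
alice,2020-05-02,pen,3,1.5
bob,2020-05-01,book,1,12.0
carol,2020-05-03,bag,2,7.25
$ cat <(head -1 orders.csv) \
      <(tail -n +2 orders.csv | sort -r --field-separator=',' --key=5 -n)
user,date,product,amount,unit price
bob,2020-05-01,book,1,12.0
carol,2020-05-03,bag,2,7.25
alice,2020-05-02,pen,3,1.5
```

The header stays on top while the data rows are ordered by unit price, highest first.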

To sort orders by highest unit price and amount, provide multiple --key options as below.

$ cat <(head -1 orders.csv) \
      <(tail -n +2 orders.csv|sort -r -t ',' -k 5 -k 4 -n)
user,date,product,amount,unit price

The value of --key can be a bit more complex. For example, to sort orders by the day of the order date, run the command below.

$ sort -t , -n -k 2.9 orders.csv
user,date,product,amount,unit price

The -k 2.9 means that for each line, sort uses the string starting from the ninth character of the second field through the end of the line.

The -k 2.9,5 means that for each line, sort only looks at the string starting from the ninth character of the second field and ending at the last character of the fifth field. The -k 2.9,5.2 means the string ends at the second character of the fifth field instead.
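
For instance, with a small made-up file whose second field is an ISO date, -k 2.9 starts the comparison at the day part, the ninth character of that field:

```shell
$ cat dates.csv
id,date
a,2020-05-03
b,2020-05-01
c,2020-05-02
$ sort -t , -n -k 2.9 dates.csv
id,date
b,2020-05-01
c,2020-05-02
a,2020-05-03
```

(The header sorts first here because its second field has no ninth character, so its numeric key is treated as 0.)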

For more details, check man sort.


Find a Tab in Hundreds of Tabs of Dozens of Safari Windows

2020 May 29

When using Safari on my Mac, I often keep hundreds of tabs open in dozens of Safari windows. For a tab I’ve worked on recently, I may remember which Safari window it’s in, and switch to that window by looking at the snapshots of all windows brought up by Ctrl + Down arrow.

all windows

To locate a tab when I’ve totally forgotten which window it’s in, it can easily be found via menu bar -> Help -> Search. Type the keywords of that tab in the input box.

search tab in menubar

If you know which Safari window a tab is in, click the “double square” icon in the upper-right corner of the window (or View -> Show Tab Overview).

search tab in window

Then type some keywords of the tab to narrow down the matched result.

search tab in window

Jackson Mix-in Annotations

2020 May 20

Jackson has a feature called Mix-in annotations. With this feature, we can write cleaner code for the domain classes. Imagine we have domain classes like below.

// package com.example
public interface Item {
    ItemType getType();
}

@Value
public class FooItem implements Item {
    @NonNull String fooId;

    public ItemType getType() {
        return ItemType.FOO;
    }
}

When implementing these domain classes, no Jackson annotations are put on them. To support Jackson serialization and deserialization for these classes, add “Mix-in” classes, for example in a separate package called com.example.jackson.

Add the ItemMixin below to let Jackson serialize and deserialize Item and its subclasses.

@JsonTypeInfo(
        use = JsonTypeInfo.Id.NAME,
        include = JsonTypeInfo.As.PROPERTY,
        property = "type")
@JsonSubTypes({
        @JsonSubTypes.Type(value = FooItem.class, name = "FOO"),
        // ...
})
public abstract class ItemMixin {
}

// merge the Jackson annotations in the Mix-in class into the Item class,
// as if these annotations were in the Item class
objectMapper.addMixIn(Item.class, ItemMixin.class);

String json = "[{\"fooId\": \"1\", \"type\": \"FOO\"}]";
List<Item> items = objectMapper.readValue(json, new TypeReference<List<Item>>() {});

Note that FooItem is implemented as an immutable class using @Value from Lombok. With @Value, FooItem has no default constructor, which makes Jackson unable to deserialize it by default. Add the FooItemMixin below to fix it.

@JsonIgnoreProperties(value = {"type"}, allowGetters = true)
abstract class FooItemMixin {
    @JsonCreator
    public FooItemMixin(@JsonProperty("fooId") String fooId) {
    }
}


With the help of Mix-in annotations, the domain classes don’t have to be compromised for Jackson support. All Jackson-relevant annotations live in separate Mix-in classes in a separate package. Further, we could provide a simple Jackson module like below.

public class MixinModule extends SimpleModule {
    @Override
    public void setupModule(SetupContext context) {
        context.setMixInAnnotations(Item.class, ItemMixin.class);
        context.setMixInAnnotations(FooItem.class, FooItemMixin.class);
        log.info("module set up");
    }
}
The consumers of the domain classes can simply register this module to take in all the Mix-in annotations.

objectMapper.registerModule(new MixinModule());

Jackson Mix-ins help especially when the domain classes come from a third-party library. In that case, the source of the domain classes cannot be modified, and using Mix-ins is more elegant than writing custom serializers and deserializers.

Which Accessor Style? "Fluent" vs. "Java Bean"

2020 May 19

Basically, there are two styles of accessor naming convention in Java.

One is the traditional “Java Bean” style.

public class Item {
    public ItemType getType() { /* ... */ }
}

Another is called the “fluent” style.

public class Item {
    public ItemType type() { /* ... */ }
}

The fluent style saves some typing when writing code and makes the code a bit less verbose: item.type().

For example, the Lombok library supports this fluent style.

lombok.accessors.fluent = [true | false] (default: false)
If set to true, generated getters and setters will not be prefixed with the bean-standard ‘get’, ‘is’ or ‘set’; instead, the methods will use the same name as the field (minus prefixes).

Which style is better?

Actually, the verbose Java Bean style is the better one, because a lot of third-party libraries recognize the Java Bean convention. For example, if Item is going to be serialized by the Jackson library as a JSON string, the fluent style wouldn’t work out of the box. Most DTO mapping libraries use the Java Bean style too. Therefore, using the standard Java Bean accessor style saves effort when integrating our classes with other libraries and frameworks.

With the Lombok library and the auto-completion in modern IDEs, the Java Bean style doesn’t necessarily mean more typing.

Run Multiple Gradle Sub Project Tasks

2020 Apr 2

Using the brace expansion mechanism of Bash, multiple sub-project tasks can be run from the command line like below.

$ ./gradlew sub-prj-1:{tasks,help}

After brace expansion, it is the same as below, but saves some typing.

$ ./gradlew sub-prj-1:tasks sub-prj-1:help

More examples.

$ ./gradlew sub-prj-{1,2}:test
# same as
# ./gradlew sub-prj-1:test sub-prj-2:test

$ ./gradlew sub-prj-{1,2}:{clean,build}
# same as
# ./gradlew sub-prj-1:clean sub-prj-1:build sub-prj-2:clean sub-prj-2:build
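
Since brace expansion is performed by Bash before the command runs, the expansion can be previewed with echo:

```shell
$ echo ./gradlew sub-prj-{1,2}:{clean,build}
./gradlew sub-prj-1:clean sub-prj-1:build sub-prj-2:clean sub-prj-2:build
```

This makes it easy to check which task invocations Gradle will actually receive before running them.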

The if-else Control Flow Using Optional

2020 Mar 27

Sometimes you may want to write the if-else control flow based on an Optional object.

For example, an API from a third party library declares Optional as its return type. You need to compose an if-else like control flow using that Optional object. Of course, it can be done by testing isPresent() in a traditional if-else statement.

var itemOpt = service.getItem(itemId);
if (itemOpt.isPresent()) {
    // ... use itemOpt.get() ...
} else {
    log.info("missing item {}", itemId);
}
The above code doesn’t take any advantage of Optional. Since Java 9, the ifPresentOrElse(Consumer<? super T>, Runnable) method can be used to implement such a control flow a bit more elegantly.

itemOpt.ifPresentOrElse(
        item -> {
            // ... use the item ...
        },
        () -> {
            log.info("missing item {}", itemId);
        });

Print Exception Together With Parameterized Log Messages

2020 Mar 5

Often when using SLF4J, you may wonder whether an exception object can be printed in the same parameterized log message together with other objects. Actually, SLF4J has supported this since 1.6.0.

log.error("failed to add item {}", "item-id-1", new RuntimeException("failed to add item"));

The SLF4J call above prints log message like below.

13:47:16.119 [main] ERROR example.TmpTest - failed to add item item-id-1
java.lang.RuntimeException: failed to add item
    at example.TmpTest.logErrorTest(TmpTest.java:10)

So there’s actually no need to log in two calls, as below.

log.error("failed to add item {}", "item-id-1");
log.error("exception details:", new RuntimeException("failed to add item"));

Here is what the official FAQ says.

The SLF4J API supports parametrization in the presence of an exception, assuming the exception is the last parameter.

Use the first style to save one line.

SLF4J doesn’t clearly state this behavior in the Javadoc of Logger.error(String format, Object... arguments) (at least not in version 1.7.26). Maybe if a method like below were added to Logger, programmers would discover this feature more easily.

public void error(String format, Throwable t, Object... arguments);

Process Substitution

2019 Dec 19

From man bash, for “process substitution” it says

It takes the form of <(list) or >(list)

Note: there is no space between < or > and (.

Also, it says,

The process list is run with its input or output connected to a FIFO or some file in /dev/fd. The name of this file is passed as an argument to the current command as the result of the expansion.

The <(list) Form

$ diff <(echo a) <(echo b)
1c1
< a
---
> b

Usually diff takes two files and compares them. The process substitution here, <(echo a), creates a file in /dev/fd, for example /dev/fd/63. The stdout of the echo a command is connected to /dev/fd/63. Meanwhile, /dev/fd/63 is used as an input file/parameter of the diff command. Similarly for <(echo b). After Bash does the substitution, the command looks like diff /dev/fd/63 /dev/fd/64. From diff’s point of view, it just compares two normal files.

In this example, one advantage of process substitution is eliminating the need for temporary files, like

$ echo a > tmp.a && echo b > tmp.b
$ diff tmp.a tmp.b
$ rm tmp.{a,b}

The >(list) Form

$ echo . | tee >(ls)

Similarly, Bash creates a file in /dev/fd when it sees >(ls). Again, let’s say the file is /dev/fd/63. Bash connects /dev/fd/63 to the stdin of the ls command, and the file /dev/fd/63 is used as a parameter of the tee command. tee views /dev/fd/63 as a normal file: it writes its content, here ., into the file, and the content “pipes” into the stdin of ls.

Compare with Pipe

A pipe, cmd-a | cmd-b, basically just passes the stdout of the command on the left to the stdin of the command on the right. Its data flow is restricted: always from left to right.

Process substitution has more freedom.

# use process substitution
$ grep -f <(echo hello) file-a
# use pipe
$ echo hello | xargs -I{} grep {} file-a

And commands like diff <(echo a) <(echo b) are not easy to express with pipes.

More Examples

$ paste <(echo hello) <(echo world)
hello   world
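
Process substitution is also handy when a command requires pre-sorted input files, like comm; the inputs can be sorted on the fly:

```shell
$ comm -12 <(printf 'b\na\n' | sort) <(printf 'c\nb\n' | sort)
b
```

comm requires both of its inputs to be sorted, and the -12 option suppresses the lines unique to each input, printing only the lines common to both.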

How It Works

From man bash,

Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.

Read more about named pipes and /dev/fd.

For /dev/fd,

The main use of the /dev/fd files is from the shell. It allows programs that use pathname arguments to handle standard input and standard output in the same manner as other pathnames.

On my OS (Ubuntu), Bash uses /dev/fd for process substitution.

$ ls -l <(echo .)
lr-x------ 1 user user 64 Dec 19 11:19 /dev/fd/63 -> pipe:[4926830]

Bash replaces <(echo .) with /dev/fd/63. The above command is like ls -l /dev/fd/63.

Or find the backing file via,

$ echo <(true)

(After my Bash does the substitution, the command becomes echo /dev/fd/63, which outputs /dev/fd/63.)

Deploy a Rails App with Docker

2019 Aug 14

This site is deployed using Docker containers on a small DigitalOcean Droplet.

Since I’ve never had a private Docker registry, the original deploy flow was,

  • push the source code to the remote Git repository
  • on the Droplet, fetch the source code from the Git remote
  • build a Docker image from the source code
  • start the Docker container

Some time ago, docker build started to keep failing at the bundle install instruction on the Droplet. During bundle install, Bundler fails to build the sassc gem, complaining:

virtual memory exhausted: Cannot allocate memory

A workaround I tried was,

  • build the Docker image on a more powerful machine, like the local dev machine
  • save the image into a tar archive (docker save img-foo | bzip2 > foo.img.bz)
  • copy the tar to the Droplet
  • load the image from the tar (bunzip2 foo.img.bz && docker load -i foo.img)
  • start the Docker container

One problem with this workaround was that the volume to transfer is a bit large (the image is around 1G before compression, and 370M after). It was time-consuming over a slow network.

Current Deploy Workflow

Actually, Bundler supports fetching gems from a local cache, not only from rubygems.org. Running bundle package --all can,

Copy all of the .gem files needed to run the application into the vendor/cache directory.

In the future, when running bundle install, use the gems in the cache in preference to the ones on rubygems.org.

The sassc gem is built on the local machine during bundle package, and put into the vendor/cache directory.

Now, the deploy flow is like,

  • run bundle package --all if the application’s dependencies change
  • check in vendor/cache folder and push it to the Git remote
  • on the Droplet, fetch the source code from the Git remote
  • build a Docker image from the source code
  • start the Docker container

The Dockerfile is like,

USER app
RUN mkdir -p /home/app/source-code
WORKDIR /home/app/source-code
COPY vendor/cache ./vendor/cache/
COPY Gemfile* ./
# use RVM, so "bash -lc ..."
RUN bash -lc 'bundle install --deployment --without development test'
# copy the app source code
COPY . ./

Other Notes

If a gem is fetched from a particular Git repository using the :git parameter, like

gem "rails", "2.3.8", :git => "https://github.com/rails/rails.git"

Then the --all option should be used in bundle package.

Some gems are built using CMake. When bundle package --all runs, CMakeCache.txt files are created and stored in vendor/cache. The problem with CMakeCache.txt files is that they contain hardcoded paths from the machine that ran bundle package --all. When building the Docker image on the remote server, those hardcoded paths fail the bundle install. Therefore, add the following line to the .dockerignore file.

