Why @Mock Annotated Object Is Null

2021 May 19

Mockito provides annotations like @Mock, @Spy, and @Captor to make code simpler. Since much of our code is copied from Google search results and Stack Overflow, we may end up using these Mockito annotations like below, after a quick read of an article found via Google.

@RunWith(MockitoJUnitRunner.class)
class OrderTest {
    @Captor
    ArgumentCaptor<Offer> offerCaptor;

    @Test
    void test() {
      // use the `offerCaptor` object ...
    }
}

However, when run, the test throws a NullPointerException because offerCaptor is null, even though it's annotated with @Captor.

(Cue hours of Googling and debugging.)

The root cause of the NPE is that the test uses JUnit 5, but @RunWith is a JUnit 4 annotation.

@RunWith no longer exists; superseded by @ExtendWith.

Therefore, @RunWith(MockitoJUnitRunner.class) is silently ignored by JUnit 5, and no setup happens for objects annotated with @Captor.

To support JUnit 5 @ExtendWith, Mockito provides a MockitoExtension class.

This extension is the JUnit Jupiter equivalent of our JUnit4 MockitoJUnitRunner.

The code below has no NPE for test cases using JUnit 5 and Mockito annotations.

@ExtendWith(MockitoExtension.class)
class OrderTest {
    @Captor
    ArgumentCaptor<Offer> offerCaptor;

    // ...
}

Some Thoughts

It would be better if JUnit 5 warned us when it finds the obsolete JUnit 4 @RunWith, instead of ignoring it silently.

Another solution is that, if a project is going to use JUnit 5, just exclude JUnit 4 from the project's dependencies. Then using @RunWith would be a compile error.
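A sketch of that exclusion, assuming a Gradle build (the configuration names are the standard ones, but check your project's setup):

```groovy
// build.gradle -- keep the JUnit 4 artifact off the test classpath,
// so a leftover "import org.junit.runner.RunWith" fails at compile time
configurations.testImplementation {
    exclude group: 'junit', module: 'junit'
}
```

Note that this also removes JUnit 4 pulled in transitively by other test libraries, so it only works if nothing on the test classpath genuinely needs JUnit 4.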

Fix a Helm "Secret Too Long" Bug

2021 Mar 23

When trying to upgrade a Helm chart, it failed with an error like the one below.

Error: UPGRADE FAILED: create: failed to create: Secret "sh.helm.release..." is invalid: data: Too long: must have at most 1048576 bytes

Helm by default uses Kubernetes Secrets to store release information. As for what "release information" is,

The release information includes the contents of charts and values files

See Storage backends for more details.

In my case, some certificates (after encryption) are stored in the chart files. These certificates are for different deploy environments. To solve this error for my chart, a simple solution is to create a script that dynamically "helm-ignores" files not needed for a Helm release in a given deploy environment. For example, add files that are only for production into the .helmignore file when deploying to the staging environment.
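A minimal sketch of such a script (the file names `.helmignore.base` and the `certs/...` paths are hypothetical; adapt them to your chart layout):

```shell
#!/bin/sh
# Regenerate .helmignore before "helm upgrade", excluding cert files
# that the target environment does not need.
DEPLOY_ENV="${DEPLOY_ENV:-staging}"

# start from a base ignore file if one exists, otherwise start empty
cp .helmignore.base .helmignore 2>/dev/null || : > .helmignore

if [ "$DEPLOY_ENV" = "staging" ]; then
  # staging releases do not need production certificates
  echo "certs/production/*" >> .helmignore
else
  echo "certs/staging/*" >> .helmignore
fi
```

Running it with `DEPLOY_ENV=staging` before `helm upgrade` keeps the production certificates out of the rendered release, shrinking the stored Secret.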

Helm also has a beta feature called “SQL storage backend” to store release information in a database, for a big chart.

Using such a storage backend is particularly useful if your release information weighs more than 1MB

CompletableFuture: join() vs. get()

2021 Mar 23

What’s the difference between join() and get() of the CompletableFuture class?

Both of them wait for the future to complete. The only difference is the kind of exceptions the two methods throw. get(), which originated in the Future interface, throws checked exceptions, while join() throws unchecked exceptions.

In the old days, like the Java 1.5 era when Future.get() was designed, checked exceptions might have been preferred. Nowadays, unchecked exceptions are generally preferred, and therefore so is join().

As per Javadoc of CompletableFuture,

To simplify usage in most contexts, this class also defines methods join() and getNow(T) that instead throw the CompletionException directly in these cases.

In other words, join() is simpler to use, in the sense of exception handling. It’s kind of an official recommendation.
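The difference can be seen in a minimal sketch (plain JDK; the class and method names here are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutionException;

// Shows the exception types thrown by get() vs. join()
// when a CompletableFuture completes exceptionally.
public class JoinVsGet {
    static String failureFromJoin() {
        CompletableFuture<String> f = new CompletableFuture<>();
        f.completeExceptionally(new IllegalStateException("boom"));
        try {
            f.get(); // checked: the compiler forces this try/catch
        } catch (ExecutionException | InterruptedException e) {
            // callers must handle these checked exceptions
        }
        try {
            f.join(); // unchecked: no try/catch required by the compiler
        } catch (CompletionException e) {
            return e.getClass().getSimpleName();
        }
        return "none";
    }

    public static void main(String[] args) {
        System.out.println(failureFromJoin()); // prints "CompletionException"
    }
}
```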

Compare Two OffsetDateTime of the Same Instant

2021 Mar 16

For two Java OffsetDateTime objects of the same instant but with different timezone offsets, the isEqual method should be used to check whether they represent the same point in time.

var date1 = OffsetDateTime.parse("2008-12-03T11:00+01:00");
var date2 = OffsetDateTime.parse("2008-12-03T12:00+02:00");
date1.isEqual(date2); // the result is true
date1.equals(date2); // the result is false
date1.compareTo(date2); // the result < 0

In the above code, both OffsetDateTime objects represent "2008-12-03T10:00Z". However, the equals and compareTo methods report that these two objects are not "equal".

The OffsetDateTime class uses a LocalDateTime field and a ZoneOffset field to represent an instant. Its equals method checks whether all these fields of the two objects are the same. Obviously, the fields of date1 do not equal those of date2.

The compareTo method has the same behavior as equals.

It is “consistent with equals”, as defined by Comparable.

As per the Javadoc of OffsetDateTime, what isEqual does is dateTime1.toInstant().equals(dateTime2.toInstant()). That is, the OffsetDateTime objects are converted to Instant objects first, and the equals of Instant is used for the comparison.

From the Javadoc of Instant,

An instantaneous point on the time-line.

the class stores a long representing epoch-seconds and an int representing nanosecond-of-second, … The epoch-seconds are measured from the standard Java epoch of 1970-01-01T00:00:00Z

An Instant object has no timezone info stored, only an epoch offset in the fixed UTC timezone: a long and an int.

Side note: what's the difference between ZonedDateTime and OffsetDateTime?

The major difference is that ZonedDateTime knows about Daylight Saving Time (DST), while OffsetDateTime does not. That's because OffsetDateTime only stores a simple offset from UTC, while ZonedDateTime stores much richer timezone info.
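The DST difference shows up in date arithmetic. A sketch (the date 2021-03-28 was the spring DST transition in the Europe/Paris zone):

```java
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Adding one day across a DST transition: ZonedDateTime keeps the
// wall-clock time and adjusts the offset; OffsetDateTime keeps its
// fixed offset and knows nothing about DST.
public class DstDemo {
    public static void main(String[] args) {
        ZonedDateTime zdt =
            ZonedDateTime.of(2021, 3, 27, 12, 0, 0, 0, ZoneId.of("Europe/Paris"));
        OffsetDateTime odt = zdt.toOffsetDateTime();

        System.out.println(zdt.plusDays(1)); // offset becomes +02:00 (DST started)
        System.out.println(odt.plusDays(1)); // offset stays +01:00
    }
}
```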

Git Command Examples

2021 Mar 2

Checkout a Remote Branch in Local

$ git checkout --track origin/master

The above command creates a local branch with the same name as the remote branch, i.e. master, and makes the local branch track the remote one. "Tracking" means that when you run git push, Git knows where to push the changes.

Some notes from git checkout --help,

As a convenience, --track without -b implies branch creation

-t, --track
When creating a new branch, set up "upstream" configuration.

If no -b option is given, the name of the new branch will be derived from the remote-tracking branch

Get the Commit SHA1 of a Tag

# assume the Git tag is "0.1.0"
$ git rev-list -n 1 0.1.0 --pretty=format:"%h" | tail -1
c363005

The tag “0.1.0” points to the commit c363005. Use %H if the full SHA1 is needed.

(Search “placeholder” in git show --help for the document of format:<string>.)

Add --abbrev option, like --abbrev=8, if a fixed width SHA1 is needed.

Fix the ^M Character Shown in git diff Result

Sometimes, when running git diff, it prints ^M at the end of some lines. The ^M character represents a carriage-return (CR) character; CR followed by LF, i.e. CRLF, is the line ending on Windows. You may have seen ^M before if you use Vim to edit files coming from Windows/DOS. Seeing ^M in the git diff result means the same line used to end with CRLF but now ends with LF, or vice versa.

Usually, a Git repository should be configured so that all text files committed into the repository end with LF, while checked-out files end with the local machine's native line endings, i.e. LF on Unix and CRLF on Windows machines. That way, ^M would not be seen in git diff. To fix a repository's configuration, add a .gitattributes file with content like

# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto

# Declare files that will always have CRLF line endings on checkout.
*.bat text eol=crlf

"Renormalize" all the files with the updated configuration.

$ git stash -u
$ git add --renormalize .
$ git status
$ git commit -m "Normalize line endings"
$ git stash pop

See this GitHub doc, Configuring Git to handle line endings, for more details.


Mod in Java Produces Negative Numbers

2021 Feb 1

The problem here is that in Python the % operator returns the modulus and in Java it returns the remainder. These functions give the same values for positive arguments, but the modulus always returns positive results for negative input, whereas the remainder may give negative results.

-21 mod 4 is 3 because -21 + 4 x 6 is 3.

But -21 divided by 4 gives -5 with a remainder of -1.
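If the modulus behavior is what you want in Java, the JDK provides Math.floorMod. A minimal sketch:

```java
// The remainder operator % vs. Math.floorMod for a negative dividend.
public class ModDemo {
    public static void main(String[] args) {
        System.out.println(-21 % 4);               // -1 : remainder, sign follows the dividend
        System.out.println(Math.floorMod(-21, 4)); //  3 : modulus, always in [0, 4)
    }
}
```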

Import Files in Protocol Buffer

2020 Nov 20

A Java project contains protocol buffer files (to define gRPC API). It uses protobuf-gradle-plugin to compile the protocol buffer files. By default this Gradle plugin reads protocol buffer source files from the src/main/proto/ directory.

The structure of the src/main/proto/ is like below.

.
├── foo
│   ├── bar
│   │   └── def.proto
│   ├── common
│   │   └── common.proto

Assume the foo/bar/def.proto file needs to import message types defined in the foo/common/common.proto file. To do so, add lines like below in the foo/bar/def.proto file.

syntax = "proto3";

package foo.bar;

import "foo/common/common.proto";

message BarType {
    // use full qualified name to refer to CommonTypeA, not just `CommonTypeA`
    // (assume the package name is "foo.common")
    foo.common.CommonTypeA a = 1; 
}

An import statement using a relative path, like import "../../common.proto";, does not work, because the protocol compiler does not search for files in "upper" directories. It only searches the directories provided by the -I/--proto_path options, or the directory in which the compiler was invoked if the -I option is not given.

The protocol compiler searches for imported files in a set of directories specified on the protocol compiler command line using the -I/--proto_path flag. If no flag was given, it looks in the directory in which the compiler was invoked. In general you should set the --proto_path flag to the root of your project and use fully qualified names for all imports.

The protobuf-gradle-plugin passes src/main/proto/ as the -I option when invoking protoc. This can be observed from the Gradle info log, ./gradlew build -i. Therefore, for the statement import "foo/common/common.proto";, protoc will successfully find the imported file under the foo/common directory of src/main/proto/.

(Usually IntelliJ IDEA knows that the src/main/proto/ is the path to search imported files. If not, add “Custom include paths” in Preferences -> Protobuf Support.)

More documentation about the --proto_path=IMPORT_PATH option.

IMPORT_PATH specifies a directory in which to look for .proto files when resolving import directives. If omitted, the current directory is used. Multiple import directories can be specified by passing the --proto_path option multiple times; they will be searched in order. -I=_IMPORT_PATH_ can be used as a short form of --proto_path.

The jq Command Examples

2020 Nov 17

Some jq examples. All quotes are from the jq manual.

A sample json file is as below.

$ cat sample.json
{
    "apple-weight": [
        60
    ],
    "orange-weight": [
        50
    ],
    "banana-weight": [
        20,
        35
    ]
}

The keys builtin function

$ jq '. | keys' sample.json
[
  "apple-weight",
  "banana-weight",
  "orange-weight"
]

The builtin function keys, when given an object, returns its keys in an array.

Array/Object Value Iterator: .[]

$ jq '. | keys[]' sample.json
"apple-weight"
"banana-weight"
"orange-weight"

If you use the .[index] syntax, but omit the index entirely, it will return all of the elements of an array.

Running .[] with the input [1,2,3] will produce the numbers as three separate results, rather than as a single array.

You can also use this on an object, and it will return all the values of the object.

exp as $x | ... and String Interpolation

$ jq '. | keys[] as $k | "\($k), \(.[$k])"' sample.json
"apple-weight, [60]"
"banana-weight, [20,35]"
"orange-weight, [50]"

The expression exp as $x | ... means: for each value of expression exp, run the rest of the pipeline with the entire original input, and with $x set to that value. Thus as functions as something of a foreach loop.

The '. | keys[] as $k | "\($k), \(.[$k])"' means: for each value of . | keys[], which are "apple-weight", "banana-weight" and "orange-weight", run the rest of the pipeline, i.e. "\($k), \(.[$k])", which is string interpolation.

String interpolation - \(foo)

More Complex Expression in String interpolation

$ jq '. | keys[] as $k | "\($k), \(.[$k][0])"  ' sample.json
"apple-weight, 60"
"banana-weight, 20"
"orange-weight, 50"

\(.[$k][0]) is replaced with the value of .["apple-weight"][0].

Array construction: []

$ jq -c '. | keys[] as $k | [$k, .[$k][0]] ' sample.json
["apple-weight",60]
["banana-weight",20]
["orange-weight",50]

You can use it to construct an array out of a known quantity of values (as in [.foo, .bar, .baz])

Array construction: []

$ jq  '[ . | keys[] as $k | [$k, .[$k][0]] ] ' sample.json
[
  [
    "apple-weight",
    60
  ],
  [
    "banana-weight",
    20
  ],
  [
    "orange-weight",
    50
  ]
]

If you have a filter X that produces four results, then the expression [X] will produce a single result, an array of four elements.

The . | keys[] as $k | [$k, .[$k][0]] produces three results, enclosing it with [] produces an array of these three elements.

Object Construction: {}

$ jq  ' . | keys[] as $k | {category: $k, weight: .[$k][0]}  ' sample.json
{
  "category": "apple-weight",
  "weight": 60
}
{
  "category": "banana-weight",
  "weight": 20
}
{
  "category": "orange-weight",
  "weight": 50
}

Object Construction: {} and Array construction: []

$ jq  '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] ' sample.json
[
  {
    "category": "apple-weight",
    "weight": 60
  },
  {
    "category": "banana-weight",
    "weight": 20
  },
  {
    "category": "orange-weight",
    "weight": 50
  }
]

The sort function

$ jq  '[ . | keys[] as $k | [$k, .[$k][0]] ] | sort ' sample.json
[
  [
    "apple-weight",
    60
  ],
  [
    "banana-weight",
    20
  ],
  [
    "orange-weight",
    50
  ]
]

The sort function sorts its input, which must be an array.

Values are sorted in the following order: null, false, true, …

The [ . | keys[] as $k | [$k, .[$k][0]] ] is an array of three elements, each of which itself is an array. These three elements, according to the manual, are sorted “in lexical order”.

The sort_by function

$ jq  '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) ' sample.json
[
  {
    "category": "banana-weight",
    "weight": 20
  },
  {
    "category": "orange-weight",
    "weight": 50
  },
  {
    "category": "apple-weight",
    "weight": 60
  }
]

sort_by(foo) compares two elements by comparing the result of foo on each element.

The [ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] is an array of three objects. The | sort_by(.weight) sorts these three objects by comparing their weight property. The final result is still an array, but sorted.

Select/Filter

$ jq  '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | .[] | select(.weight >= 50) ' sample.json
{
  "category": "orange-weight",
  "weight": 50
}
{
  "category": "apple-weight",
  "weight": 60
}

The function select(foo) produces its input unchanged if foo returns true for that input, and produces no output otherwise.

The [ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) produces a sorted array. The following .[], i.e. the array iterator, feeds select(.weight >= 50) with the three elements of that array. The final result is the elements whose weight is equal to or larger than 50.

The command below, using map, produces the same result.

$ jq  '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | map(select(.weight >= 50)) ' sample.json
[
  {
    "category": "orange-weight",
    "weight": 50
  },
  {
    "category": "apple-weight",
    "weight": 60
  }
]

Multiple Conditions in select

$ jq  '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | .[] | select( (.weight >= 50) and (.weight < 60))  ' sample.json
{
  "category": "orange-weight",
  "weight": 50
}

Avoid Wrong Tracking When Create Branches in Git

2020 Sep 8

Just made a mistake pushing commits to the wrong remote branch. Below are the details.

  1. Need to create a new branch br-x, which needs to be based on the newest remote dev branch.
  2. Run git fetch to get newest change from the remote.
  3. Run git checkout -b br-x origin/dev to create branch br-x.
  4. Change and commit files in branch br-x.
  5. Run git push origin -u br-x to push commits to the remote.

In step 3, origin/dev is used as the "start-point" of the new br-x branch. As per git branch --help,

When a local branch is started off a remote-tracking branch, Git sets up the branch (specifically the branch.<name>.remote and branch.<name>.merge configuration entries) so that git pull will appropriately merge from the remote-tracking branch. This behavior may be changed via the global branch.autoSetupMerge configuration flag.

In other words, git checkout -b br-x origin/dev not only creates a new br-x branch, but also makes br-x track the remote dev branch. As a result, in step 5, git push origin -u br-x doesn't push commits into a same-name remote branch. Instead, it pushes commits into the remote dev branch, which the local br-x has been tracking since its creation. The remote dev branch is accidentally modified. 😞

To avoid it, one method is to use the local dev branch as the "start-point" in step 3. Since the local dev may be behind the remote dev, you may have to switch to the local dev and git pull to update it first. Another method is using the --no-track option, i.e. git checkout --no-track -b br-x origin/dev.

A more thorough method is using git config --global branch.autoSetupMerge false to change the default behavior of Git. When branch.autoSetupMerge is false, Git will not set up a tracking branch when creating a branch, even if the "start-point" is a remote-tracking branch. For more details, search "branch.autoSetupMerge" in git config --help.

For what a "remote-tracking" branch is, check this link. Simply put,

Remote-tracking branch names take the form <remote>/<branch>.

Enable JUnit 5 Test Cases Based on Java System Properties

2020 Aug 26

JUnit 5 supports disabling test cases via the @Disabled annotation. Sometimes, you may want some cases to be conditionally disabled or enabled. JUnit 5 provides many annotations to support conditional test execution, like @EnabledOnOs(MAC), @DisabledOnJre(JAVA_8) and @EnabledIfEnvironmentVariable.

Let's say a test case should only run when a Java system property is set to a certain value. To do this, add @EnabledIfSystemProperty onto the test case.

@Test
@EnabledIfSystemProperty(named = "foo.enabled", matches = "on")
void fooTest() {
}

The fooTest runs only when the foo.enabled system property is set to on. To run this test case via Gradle, type the command below.

./gradlew test --tests '*.fooTest' -i -Dfoo.enabled=on

To make the JVM that runs the test cases see the Java system property passed to the JVM running Gradle, add the lines below to the build.gradle file.

test {
    systemProperty "foo.enabled", System.getProperty("foo.enabled")
}
