Git Command Examples
- Checkout a Remote Branch in Local
- Print the Short Commit SHA1 of a Git Tag
- Fix the ^M Character Shown in git diff Result
- Prune stale remote branches in local repository
Checkout a Remote Branch in Local
$ git checkout --track origin/master
The above command creates a local branch with the same name as the remote branch, i.e. master, and lets the local branch track the remote one. “Tracking” means that when you run git push, Git knows where to push the changes.
Some notes from git checkout --help:

As a convenience, --track without -b implies branch creation.

-t, --track
When creating a new branch, set up “upstream” configuration.
If no -b option is given, the name of the new branch will be derived from the remote-tracking branch.
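A quick way to confirm the tracking relationship is git rev-parse --abbrev-ref with the @{upstream} suffix. A minimal, self-contained sketch (the throwaway repository paths and the feat-x branch are made up for illustration):

```shell
# A throwaway "remote" repository and a clone of it (illustration only).
tmp=$(mktemp -d)
git init -q "$tmp/remote"
git -C "$tmp/remote" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
git -C "$tmp/remote" branch feat-x
git clone -q "$tmp/remote" "$tmp/local"
cd "$tmp/local"

# --track without -b creates a local branch named after the remote branch.
git checkout -q --track origin/feat-x

# Ask Git which upstream the new branch tracks; prints "origin/feat-x".
git rev-parse --abbrev-ref 'feat-x@{upstream}'
```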
Print the Short Commit SHA1 of a Git Tag
# Assume the Git tag is "0.1.0"
$ git rev-list -n 1 0.1.0 --pretty=format:"%h" | tail -1
c363005
The tag “0.1.0” points to the commit c363005.
Use %H if the full SHA1 is needed.
(Search “placeholder” in git show --help for the documentation of format:<string>.)
Add the --abbrev option, like --abbrev=8, if a fixed-width SHA1 is needed.
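An alternative is git rev-parse with the ^{commit} suffix, which peels an annotated tag down to the commit it points at. A self-contained sketch (the repository and tag here are created just for illustration):

```shell
# A throwaway repository with an annotated tag (illustration only).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
git tag -a 0.1.0 -m "release 0.1.0"

# ^{commit} peels the annotated tag to the commit it points at.
git rev-parse --short '0.1.0^{commit}'     # abbreviated commit SHA1
git rev-parse --short=8 '0.1.0^{commit}'   # fixed width of 8
```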
Fix the ^M Character Shown in git diff Result
Sometimes, when you run git diff, it prints ^M at the end of some lines.
The ^M character represents a carriage return, the CR in CRLF, which is part of the Windows line ending.
You may have seen ^M before if you use Vim to edit files coming from Windows/DOS.
Seeing ^M in the git diff result means the same line ended with CRLF before but now ends with LF, or vice versa.
Usually, a Git repository should be configured so that all text files committed into the repository end with LF, while checked-out files end with the local machine’s specific line endings, i.e. LF on Unix and CRLF on Windows machines. That way, ^M would not be seen in git diff. To fix a repository’s configuration, add a .gitattributes file with content like
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto
# Declare files that will always have CRLF line endings on checkout.
*.bat text eol=crlf
Then “renormalize” all the files with the updated configuration.
$ git stash -u
$ git add --renormalize .
$ git status
$ git commit -m "Normalize line endings"
$ git stash pop
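To inspect line endings directly, git ls-files --eol reports, for each tracked file, the endings stored in the index (i/) and in the working tree (w/). A self-contained sketch (the file names are made up for illustration):

```shell
# A throwaway repository with one CRLF file and one LF file (illustration only).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'hello\r\n' > run.bat
printf 'hello\n'   > notes.txt
git -c core.autocrlf=false add .

# i/... = endings stored in the index, w/... = endings in the working tree.
git ls-files --eol
```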
See this GitHub doc, Configuring Git to handle line endings, for more details.
Prune stale remote branches in local repository
As time goes by, the local repository may accumulate many remote-tracking branches which were actually deleted in the remote repository. For example, in GitLab someone’s feature branch is usually deleted when its merge request is merged. However, these origin/feat-x, origin/feat-y branches are kept in your local repository since they were fetched.
To delete these stale remote branches in local all at once, run
$ git remote prune origin
# Or,
$ git fetch --prune
It’s said in git remote --help,

might even prune local tags that haven’t been pushed there.

So it’s a good idea to run the above commands with the --dry-run option first.
Delete a single remote branch with git branch -r -d origin/feat-x.
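The --dry-run check can be sketched as below (the throwaway repositories and the feat-x branch are made up for illustration):

```shell
# A throwaway "remote" repository and a clone of it (illustration only).
tmp=$(mktemp -d)
git init -q "$tmp/remote"
git -C "$tmp/remote" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
git -C "$tmp/remote" branch feat-x
git clone -q "$tmp/remote" "$tmp/local"

# Delete the branch on the "remote"; origin/feat-x survives in the clone.
git -C "$tmp/remote" branch -q -D feat-x

# --dry-run reports what would be pruned without deleting anything.
git -C "$tmp/local" remote prune --dry-run origin
```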
The jq Command Examples
- The keys builtin function
- Array/Object Value Iterator: .[]
- exp as $x | ... and String Interpolation
- More Complex Expression in String interpolation
- Array construction: []
- Object Construction: {}
- Object Construction: {} and Array construction: []
- The sort function
- The sort_by function
- Select/Filter
- Multiple Conditions in select

Some jq examples. All quotes are from the jq manual.
A sample json file is as below.
$ cat sample.json
{
"apple-weight": [
60
],
"orange-weight": [
50
],
"banana-weight": [
20,
35
]
}
The keys builtin function
$ jq '. | keys' sample.json
[
"apple-weight",
"banana-weight",
"orange-weight"
]
The builtin function keys, when given an object, returns its keys in an array.
Array/Object Value Iterator: .[]
$ jq '. | keys[]' sample.json
"apple-weight"
"banana-weight"
"orange-weight"
If you use the .[index] syntax, but omit the index entirely, it will return all of the elements of an array.
Running .[] with the input [1,2,3] will produce the numbers as three separate results, rather than as a single array.
You can also use this on an object, and it will return all the values of the object.
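For example, running the iterator on an object yields its values, not its keys, each as a separate result. A self-contained sketch that recreates sample.json (assuming jq is installed):

```shell
cd "$(mktemp -d)"
cat > sample.json <<'EOF'
{
  "apple-weight": [60],
  "orange-weight": [50],
  "banana-weight": [20, 35]
}
EOF

# .[] on an object emits each value as a separate result, in input order.
jq -c '.[]' sample.json
# [60]
# [50]
# [20,35]
```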
exp as $x | ... and String Interpolation
$ jq '. | keys[] as $k | "\($k), \(.[$k])"' sample.json
"apple-weight, [60]"
"banana-weight, [20,35]"
"orange-weight, [50]"
The expression
exp as $x | ...
means: for each value of expression exp, run the rest of the pipeline with the entire original input, and with $x set to that value. Thus as functions as something of a foreach loop.
The '. | keys[] as $k | "\($k), \(.[$k])"' means: for each value of . | keys[], which are “apple-weight”, “banana-weight” and “orange-weight”, run the rest of the pipeline, i.e. "\($k), \(.[$k])", which is string interpolation.

String interpolation - \(foo)
More Complex Expression in String interpolation
$ jq '. | keys[] as $k | "\($k), \(.[$k][0])" ' sample.json
"apple-weight, 60"
"banana-weight, 20"
"orange-weight, 50"
\(.[$k][0]) is replaced with the value of .["apple-weight"][0], and so on for each key.
Array construction: []
$ jq -c '. | keys[] as $k | [$k, .[$k][0]] ' sample.json
["apple-weight",60]
["banana-weight",20]
["orange-weight",50]
$ jq '[ . | keys[] as $k | [$k, .[$k][0]] ] ' sample.json
[
[
"apple-weight",
60
],
[
"banana-weight",
20
],
[
"orange-weight",
50
]
]
If you have a filter X that produces four results, then the expression [X] will produce a single result, an array of four elements.
The . | keys[] as $k | [$k, .[$k][0]] produces three results; enclosing it with [] produces an array of these three elements.
Object Construction: {}
$ jq ' . | keys[] as $k | {category: $k, weight: .[$k][0]} ' sample.json
{
"category": "apple-weight",
"weight": 60
}
{
"category": "banana-weight",
"weight": 20
}
{
"category": "orange-weight",
"weight": 50
}
Object Construction: {} and Array construction: []
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] ' sample.json
[
{
"category": "apple-weight",
"weight": 60
},
{
"category": "banana-weight",
"weight": 20
},
{
"category": "orange-weight",
"weight": 50
}
]
The sort function
$ jq '[ . | keys[] as $k | [$k, .[$k][0]] ] | sort ' sample.json
[
[
"apple-weight",
60
],
[
"banana-weight",
20
],
[
"orange-weight",
50
]
]
The sort function sorts its input, which must be an array.

Values are sorted in the following order: null, false, true, …
The [ . | keys[] as $k | [$k, .[$k][0]] ] is an array of three elements, each of which is itself an array. These three elements, according to the manual, are sorted “in lexical order”.
The sort_by function
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) ' sample.json
[
{
"category": "banana-weight",
"weight": 20
},
{
"category": "orange-weight",
"weight": 50
},
{
"category": "apple-weight",
"weight": 60
}
]
sort_by(foo) compares two elements by comparing the result of foo on each element.
The [ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] is an array of three objects. The | sort_by(.weight) sorts these three objects by comparing their weight property. The final result is still an array, but sorted.
Select/Filter
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | .[] | select(.weight >= 50) ' sample.json
{
"category": "orange-weight",
"weight": 50
}
{
"category": "apple-weight",
"weight": 60
}
The function select(foo) produces its input unchanged if foo returns true for that input, and produces no output otherwise.
The [ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) produces a sorted array. The following .[], i.e. the array iterator, feeds select(.weight >= 50) with the three elements of that array. The final result is the elements whose weight is equal to or larger than 50.
The command below, using map, produces the same result.
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | map(select(.weight >= 50)) ' sample.json
[
{
"category": "orange-weight",
"weight": 50
},
{
"category": "apple-weight",
"weight": 60
}
]
Multiple Conditions in select
$ jq '[ . | keys[] as $k | {category: $k, weight: .[$k][0]} ] | sort_by(.weight) | .[] | select( (.weight >= 50) and (.weight < 60)) ' sample.json
{
"category": "orange-weight",
"weight": 50
}
grep Command Examples
- Stop after first match
- Print only filename if match
- Find unmatched files
- Show line number of matched lines
- Don’t output filename when grep multiple files
- Search in “binary” files
- Search in directories
- Ignore case when searching
- The pattern to search begins with - (hyphen)
- Use pattern file
- Print only count of matching lines
First, grep --help lists most of its options, which is the go-to command for most grep questions.
Like most CLI tools, grep’s options can be combined. For example, -io is the same as -i -o, and -A3 is the same as -A 3. Also, the options can appear anywhere in the command.
$ grep hello a.txt -i --color
Stop after first match
$ grep -m 1 search-word file
-m, --max-count=NUM stop after NUM matches
Only print the 1000th match.
$ grep -m1000 search-word file | tail -n1
Print only filename if match
$ grep -l search-word *.txt
-l, --files-with-matches print only names of FILEs containing matches
It’s useful when you grep lots of files and only care about names of matched files.
Find unmatched files
-L, --files-without-match print only names of FILEs containing no match

-L is the opposite of the -l option. It outputs the files which don’t contain the word to search.
$ grep -L search-word *.txt
Show line number of matched lines
$ grep -n search-word file
-n, --line-number print line number with output lines
Don’t output filename when grep multiple files
When grepping multiple files, by default the filename is included in the output. Like,
$ grep hello *.txt
a.txt:hello
b.txt:hello
Use -h to not output filenames.
$ grep -h hello *.txt
hello
hello
-h, --no-filename suppress the file name prefix on output
Search in “binary” files
Sometimes, a text file may contain a few non-printable characters, which makes grep consider it a “binary” file. grep doesn’t print matched lines for a “binary” file.
$ printf "hello\000" > test.txt
$ grep hello test.txt
Binary file test.txt matches
Use -a to let grep know the file should be treated as a “text” file.
$ grep -a hello test.txt
hello
-a, --text equivalent to --binary-files=text
Search in directories
-r, --recursive like --directories=recurse
-R, --dereference-recursive likewise, but follow all symlinks
Without specifying a directory, grep searches in the current working directory by default.
$ grep -R hello
b.md:hello
a.txt:hello
Specify directories.
$ grep -R hello tmp/ tmp2/
tmp/b.md:hello
tmp/a.txt:hello
tmp2/b.md:hello
tmp2/a.txt:hello
--include=FILE_PATTERN search only files that match FILE_PATTERN
Use --include to tell grep the pattern of the filenames you’re interested in.
$ grep -R hello --include="*.md"
b.md:hello
Ignore case when searching
-i, --ignore-case ignore case distinctions
$ grep -i Hello a.txt
hello
HELLO
The pattern to search begins with - (hyphen)
$ grep -- -hello a.txt
-hello
To find out what the -L option does:
$ grep --help | grep -- -L
-L, --files-without-match print only names of FILEs containing no match
Use pattern file
-f FILE, --file=FILE Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (--regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing.
$ cat test.txt
111
222
333
$ cat patterns.txt
111
333
$ grep -f patterns.txt test.txt
111
333
NOTE: Do not put an empty line, i.e. a line with \n only, in the pattern file. Otherwise, the pattern file would match every line, since every line contains \n as its last character. It’s an easy mistake to leave empty lines at the end of the pattern file.
Print only count of matching lines
Use -c, or --count, to print only the count of matching lines. For example, the command below finds the count of the <OrderLine> tag in files of the current directory.
$ grep "<OrderLine>" -c -R .
It outputs something like below.
./order-1.xml:3
./order-2.xml:9
./order-3.xml:1
To sort the output by count, use a command like below.
$ grep "<OrderLine>" -c -R . | sort -t : -k 2
./order-3.xml:1
./order-1.xml:3
./order-2.xml:9