- 1. Scripts
- 1.1. AsciiDoc
- 1.2. General
- 1.3. Certificates
- 1.3.1. copy_ca_based_cert
- 1.3.2. copy_ca_root_cert
- 1.3.3. create_ca
- 1.3.4. create_ca_based_cert
- 1.3.5. create_self_signed_cert
- 1.3.6. delete_ca
- 1.3.7. delete_ca_based_cert
- 1.3.8. delete_self_signed_cert
- 1.3.9. renew_ca_based_cert
- 1.3.10. renew_self_signed_cert
- 1.3.11. verify_ca_based_cert
- 1.3.12. verify_self_signed_cert
- 1.4. Docker
- 1.5. Git
- 1.6. GitHub
- 1.7. Gradle
- 1.8. Java
- 1.9. Keycloak
- 1.10. LaTeX
- 1.11. Node.js
- 1.12. PDF
- 1.13. Web
- 1.13.1. compress_brotli
- 1.13.2. compress_gzip
- 1.13.3. compress_zstd
- 1.13.4. create_build_info_js
- 1.13.5. create_build_info_json
- 1.13.6. create_build_info_ts
- 1.13.7. minify_css
- 1.13.8. minify_gif
- 1.13.9. minify_html
- 1.13.10. minify_jpeg
- 1.13.11. minify_json
- 1.13.12. minify_json_tags
- 1.13.13. minify_png
- 1.13.14. minify_robots
- 1.13.15. minify_svg
- 1.13.16. minify_traffic_advice
- 1.13.17. minify_webmanifest
- 1.13.18. minify_xml
- 2. Functions
- 3. License
- 4. Contribution
- 5. Code of Conduct
- 6. Development Environment Setup
- 6.1. Installation
- 6.1.1. brotli
- 6.1.2. curl
- 6.1.3. Docker
- 6.1.4. easyrsa
- 6.1.5. exiftool
- 6.1.6. gifsicle
- 6.1.7. GitHub CLI
- 6.1.8. hadolint
- 6.1.9. JDK
- 6.1.10. jpegoptim
- 6.1.11. jq
- 6.1.12. Gradle
- 6.1.13. Node Version Manager
- 6.1.14. optipng
- 6.1.15. oxipng
- 6.1.16. qpdf
- 6.1.17. shellcheck
- 6.1.18. shfmt
- 6.1.19. yamllint
- 6.1.20. zstd
- 6.2. IDE
Miscellaneous shell-related scripts and functions.
This section contains scripts related to AsciiDoc:

- asciidoc_html_build: typeset the documents of a given directory into HTML
- asciidoc_pdf_build: typeset the documents of a given directory into PDFs
This script will typeset the documents of a given source directory into HTML.

The following parameters are optional:

- -f: delete the output directory before typesetting
- -n: turn caching off
- -o: the output directory ($PWD/build if not given)
- -s: the source directory ($PWD/src if not given)
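The option handling described above can be sketched with getopts; this is a minimal sketch of the documented flags and defaults, and parse_build_opts is a hypothetical helper name, not the actual script:

```shell
# Sketch of the documented -f/-n/-o/-s option handling (not the actual script).
parse_build_opts() {
  force=0
  cache=1
  out="${PWD}/build"
  src="${PWD}/src"
  OPTIND=1 # reset so the function can be called more than once
  while getopts 'fno:s:' opt "$@"; do
    case "$opt" in
      f) force=1 ;;       # delete the output directory before typesetting
      n) cache=0 ;;       # turn caching off
      o) out="$OPTARG" ;; # output directory
      s) src="$OPTARG" ;; # source directory
      *) return 64 ;;     # usage error
    esac
  done
}
```

Called without flags, the defaults $PWD/build and $PWD/src remain in effect, mirroring the parameter list above.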
ℹ️ Docker needs to be installed.

ℹ️ This is useful for AsciiDoc includes.

ℹ️ This script configures Asciidoctor to not use webfonts, use local FontAwesome assets, embed images, and inline CSS.

ℹ️ If you want to use custom fonts, this script assumes that the

💡 Depending on your use case, you might want to use Antora instead of this script.
$ tree --noreport -I scripts
.
└── src
└── index.adoc
$ scripts/asciidoc/asciidoc_html_build.sh
$ tree --noreport -I scripts
.
├── build
│ └── index.html (1)
└── src
└── index.adoc
$ tree --noreport -a /tmp/example
/tmp/example
└── src
├── _includes
│ └── footer.adoc
├── a
│ ├── b
│ │ └── sub.adoc
│ └── dir.adoc
├── css
│ └── font-awesome.css (2)
├── docinfo
│ └── docinfo.html (3)
├── fonts
│ └── fontawesome-webfont-4.7.0.woff2 (2)
└── index.adoc
$ cat /tmp/example/src/docinfo/docinfo.html (3)
<style>
pre.rouge .hll { (4)
background-color: #ffffe0;
display: block;
}
pre.rouge .hll * { (4)
background-color: initial;
}
</style>
$ cat /tmp/example/src/a/b/sub.adoc
= 1. sub
:source-highlighter: rouge (4)
:rouge-style: github
:docinfo: shared (3)
:docinfodir: ../../docinfo (3)
[NOTE] (2)
====
Test
====
[plantuml] (5)
....
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml
Person(p, "Support")
System(s, "Test System")
Rel(p, s, "Uses", "https")
@enduml
....
[source,shell,highlight=2..3;5] (4)
....
A
B
C
D
E
F
G
....
include::../../_includes/footer.adoc[]
$ scripts/asciidoc/asciidoc_html_build.sh -s /tmp/example/src -o /tmp/example/out
$ tree --noreport -a /tmp/example -I src
/tmp/example
└── out (6)
├── a
│ ├── b
│ │ ├── .asciidoctor (8)
│ │ │ └── diagram
│ │ │ └── diag-plantuml-md5-757a0ec403d52693302a4f18fd7ec102.png.cache
│ │ ├── css
│ │ │ └── font-awesome.css (7)
│ │ ├── diag-plantuml-md5-757a0ec403d52693302a4f18fd7ec102.png (8)
│ │ ├── fonts
│ │ │ └── fontawesome-webfont-4.7.0.woff2 (7)
│ │ └── sub.html
│ ├── css
│ │ └── font-awesome.css (7)
│ ├── dir.html
│ └── fonts
│ └── fontawesome-webfont-4.7.0.woff2 (7)
├── css
│ └── font-awesome.css (7)
├── fonts
│ └── fontawesome-webfont-4.7.0.woff2 (7)
└── index.html
$ scripts/asciidoc/asciidoc_html_build.sh -s /tmp/example/src -o /tmp/example/out -f -n
$ tree --noreport -a /tmp/example -I src
/tmp/example
└── out (9)
├── a
│ ├── b
│ │ ├── css
│ │ │ └── font-awesome.css
│ │ ├── fonts
│ │ │ └── fontawesome-webfont-4.7.0.woff2
│ │ └── sub.html
│ ├── css
│ │ └── font-awesome.css
│ ├── dir.html
│ └── fonts
│ └── fontawesome-webfont-4.7.0.woff2
├── css
│ └── font-awesome.css
├── fonts
│ └── fontawesome-webfont-4.7.0.woff2
└── index.html
- the typeset HTML
- admonitions need Font Awesome
- a document with a PlantUML diagram
- notice there is no footer.html in an _includes directory
- Asciidoctor currently does not support a nested set of documents well
- the files of the diagram cache
- the output directory has been cleaned (-f) and no files for the cache have been created (-n)
$ cd scripts/asciidoc/example-html
$ ../asciidoc_html_build.sh
⇒ build/main.html
This script will typeset the documents of a given source directory into PDFs.

The following parameters are optional:

- -f: delete the output directory before typesetting
- -n: turn caching off
- -o: the output directory ($PWD/build if not given)
- -s: the source directory ($PWD/src if not given)
ℹ️ Docker needs to be installed.

ℹ️ This is useful for AsciiDoc includes.

ℹ️ If you want to use a custom theme, this script assumes that the theme is called
If you want to use custom fonts, this script assumes that the
You can configure the font used by PlantUML with the
$ tree --noreport -I scripts
.
└── src
└── index.adoc
$ scripts/asciidoc/asciidoc_pdf_build.sh
$ tree --noreport -I scripts
.
├── build
│ └── index.pdf (1)
└── src
└── index.adoc
$ tree --noreport -a /tmp/example
/tmp/example
└── src
├── _includes
│ └── footer.adoc
├── a
│ ├── b
│ │ └── sub.adoc
│ └── dir.adoc
├── fonts (2)
│ ├── JetBrainsMono-Bold.ttf
│ ├── JetBrainsMono-BoldItalic.ttf
│ ├── JetBrainsMono-Italic.ttf
│ ├── JetBrainsMono-Regular.ttf
│ ├── Lora-Bold.ttf
│ ├── Lora-BoldItalic.ttf
│ ├── Lora-Italic.ttf
│ ├── Lora-Regular.ttf
│ └── NotoEmoji-Regular.ttf
├── index.adoc
└── themes
├── basic-theme.yml (3)
└── basic-plantuml.cfg (4)
$ cat /tmp/example/src/themes/basic-theme.yml
---
# https://github.com/asciidoctor/asciidoctor-pdf/blob/main/docs/theming-guide.adoc
extends: default
font:
  catalog:
    JetBrainsMono:
      normal: JetBrainsMono-Regular.ttf (5)
      italic: JetBrainsMono-Italic.ttf
      bold: JetBrainsMono-Bold.ttf
      bold_italic: JetBrainsMono-BoldItalic.ttf
    Lora:
      normal: Lora-Regular.ttf
      italic: Lora-Italic.ttf
      bold: Lora-Bold.ttf
      bold_italic: Lora-BoldItalic.ttf
    NotoEmoji: NotoEmoji-Regular.ttf
  fallbacks: [NotoEmoji]
base:
  font-family: Lora
codespan:
  font-family: JetBrainsMono
code:
  font-family: JetBrainsMono
kbd:
  font-family: JetBrainsMono
$ cat /tmp/example/src/themes/basic-plantuml.cfg
skinparam defaultFontName Lora (6)
$ cat /tmp/example/src/a/b/sub.adoc
= Sub
[plantuml] (7)
....
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml
Person(p, "Support")
System(s, "Test System")
Rel(p, s, "Uses", "https")
@enduml
....
include::../../_includes/footer.adoc[]
$ scripts/asciidoc/asciidoc_pdf_build.sh -s /tmp/example/src -o /tmp/example/out
$ tree --noreport -a /tmp/example -I src
/tmp/example
└── out (8)
├── a
│ ├── b
│ │ ├── .asciidoctor (9)
│ │ │ └── diagram
│ │ │ └── diag-plantuml-md5-647767ca39d0b7ada6e2164960017d01.png.cache
│ │ ├── diag-plantuml-md5-647767ca39d0b7ada6e2164960017d01.png (9)
│ │ └── sub.pdf
│ └── dir.pdf
├── diag-plantuml-md5-647767ca39d0b7ada6e2164960017d01.png (9)
└── index.pdf
$ scripts/asciidoc/asciidoc_pdf_build.sh -s /tmp/example/src -o /tmp/example/out -f -n
$ tree --noreport -a /tmp/example -I src
/tmp/example
└── out (10)
├── a
│ ├── b
│ │ └── sub.pdf
│ └── dir.pdf
└── index.pdf
- the typeset PDF
- custom fonts
- custom theme
- PlantUML customization
- the custom theme uses the custom fonts
- PlantUML uses a custom font
- a document with a PlantUML diagram
- notice there is no footer.pdf in an _includes directory
- the files of the diagram cache
- the output directory has been cleaned (-f) and no files for the cache have been created (-n)
$ cd scripts/asciidoc/example-pdf
$ ../asciidoc_pdf_build.sh
⇒ build/main.pdf
Typeset PDF: main.pdf

$ exiftool build/main.pdf
ExifTool Version Number         : 12.76
File Name                       : main.pdf
Directory                       : build
File Size                       : 132 kB
File Modification Date/Time     : 2024:09:16 02:02:49+02:00
File Access Date/Time           : 2024:09:16 02:02:51+02:00
File Inode Change Date/Time     : 2024:09:16 02:02:49+02:00
File Permissions                : -rw-r--r--
File Type                       : PDF
File Type Extension             : pdf
MIME Type                       : application/pdf
PDF Version                     : 1.4
Linearized                      : No
Page Count                      : 10
Page Mode                       : UseOutlines
Title                           : AsciiDoc Showcase
Author                          : Sebastian Davids
Creator                         : Sebastian Davids
Producer                        : Asciidoctor PDF 2.3.18, based on Prawn 2.4.0
Modify Date                     : 2024:09:15 23:58:24+00:00
Create Date                     : 2024:09:16 00:02:45+00:00
$ ../../pdf/pdf_remove_metadata.sh build/main.pdf
$ exiftool build/main.pdf
ExifTool Version Number         : 12.76
File Name                       : main.pdf
Directory                       : build
File Size                       : 132 kB
File Modification Date/Time     : 2024:09:16 02:03:43+02:00
File Access Date/Time           : 2024:09:16 02:03:45+02:00
File Inode Change Date/Time     : 2024:09:16 02:03:43+02:00
File Permissions                : -rw-------
File Type                       : PDF
File Type Extension             : pdf
MIME Type                       : application/pdf
PDF Version                     : 1.4
Linearized                      : Yes
Page Mode                       : UseOutlines
Page Count                      : 10
This section contains generally useful scripts:

- copy_shell_scripts: copy *.sh files from a source to a destination directory
- counter: create a counter
- create_timestamp_file: create a file with a timestamp
- fix_permissions: fix the directory, file, and script permissions in the given directory
- loop: invoke a script repeatedly
- hash_filename: insert a hash into a filename
- shellscript_check: shellcheck *.sh files in the given directory
- shellscript_format: format the shell script sources in the given directory
- shellscript_format_check: check the formatting of shell script sources in the given directory
This script will copy the *.sh files in the given directory ($PWD if not given) and its subdirectories to the destination directory.
You are prompted to overwrite existing files.
The copied files will have their permissions set to 700.
On macOS, all extended attributes of the copied files will be cleared.

The following parameter is required:

- -d: the destination directory; the given directory will be created if it does not exist yet

The following parameters are optional:

- -f: overwrite existing files without prompt
- -g: add group read and execute permissions; can be combined with -o
- -o: add other read and execute permissions; can be combined with -g
ℹ️ This script will error out when two or more scripts with the same name are found in the given source directory:

a.sh
/tmp/src/sub/a.sh
/tmp/src/a.sh

In that case, you need to rename or delete one of the scripts before executing this script again.
💡 Copy the scripts to a $PATH directory. Example zsh setup:

$ mkdir ~/.local/scripts

~/.zshrc:
export PATH="${HOME}/.local/scripts${PATH:+:${PATH}}"
$ scripts/general/copy_shell_scripts.sh -d ~/.local/scripts
$ scripts/general/copy_shell_scripts.sh -d /tmp/dst -s /tmp/src
The following script names are not unique:
a.sh
/tmp/src/sub/a.sh
/tmp/src/a.sh
Make the file names unique and execute this script again.
$ tree --noreport -p /tmp/src
[drwxrwxrwx] /tmp/src
├── [-rwxrwxrwx] a.sh
└── [drwxrwxrwx] sub
├── [-rw-r--r--] a.sh
└── [-rwxrwxrwx] b.sh
$ rm /tmp/src/sub/a.sh (1)
$ scripts/general/copy_shell_scripts.sh -d /tmp/dst -s /tmp/src
$ tree --noreport -p /tmp/dst
[drwx------] /tmp/dst
├── [-rwx------] a.sh
└── [-rwx------] b.sh
$ scripts/general/copy_shell_scripts.sh -d /tmp/dst -s /tmp/src -g
The following files will be overwritten:
a.sh
b.sh
Do you really want to irreversibly overwrite them (Y/N)? Y
$ tree --noreport -p /tmp/dst
[drwxr-x---] /tmp/dst
├── [-rwxr-x---] a.sh
└── [-rwxr-x---] b.sh
$ scripts/general/copy_shell_scripts.sh -d /tmp/dst -s /tmp/src -o -f (2)
$ tree --noreport -p /tmp/dst
[drwx---r-x] /tmp/dst
├── [-rwx---r-x] a.sh
└── [-rwx---r-x] b.sh
$ scripts/general/copy_shell_scripts.sh -d /tmp/dst -s /tmp/src -g -o -f
$ tree --noreport -p /tmp/dst
[drwxr-xr-x] /tmp/dst
├── [-rwxr-xr-x] a.sh
└── [-rwxr-xr-x] b.sh
1. resolve the situation by deleting one of the scripts
2. also add -f so we do not get asked if we want to overwrite the files
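The duplicate-name check behind the error shown above could be sketched with find, sort, and uniq; find_duplicate_script_names is a hypothetical helper, not the actual script:

```shell
# Sketch: list duplicate *.sh basenames under a source directory.
# A non-empty result corresponds to the "not unique" error shown above.
find_duplicate_script_names() {
  find "${1:-$PWD}" -type f -name '*.sh' -exec basename {} \; | sort | uniq -d
}
```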
This script will create a counter with the given name.
The optional second positive integer parameter will stop the counter when the current count is equal to or larger than the given argument.
Invoking this script will print the current count to stdout unless the counter has been removed.
The exit code of the script will be 100 when the count has been increased or 0 when the counter has been removed.
The count is persisted in a file in a temporary directory or COUNTER_DIR if set in the environment.
#!/usr/bin/env sh
scripts/general/counter.sh toggle 1 1>/dev/null
if [ $? -eq 100 ]; then
echo 'on'
else
echo 'off'
fi
#!/usr/bin/env sh
COUNTER_DIR="${XDG_STATE_HOME:=${HOME}}/retry" scripts/general/counter.sh retry 3 1>/dev/null
if [ $? -ne 100 ]; then
echo 'tried enough times' >&2
exit 50
fi
$ scripts/general/counter.sh my-counter 2
1
$ echo $?
100
$ scripts/general/counter.sh my-counter 2
2
$ echo $?
100
$ scripts/general/counter.sh my-counter 2
$ echo $?
0
$ ./toggle.sh
on
$ ./toggle.sh
off
$ ./toggle.sh
on
$ mkdir -p "${XDG_STATE_HOME:=${HOME}}/retry"
$ ./retry.sh
$ ls "${XDG_STATE_HOME:=${HOME}}/retry"
counter-retry
$ cat /home/example/.local/state/retry/counter-retry
1
$ ./retry.sh
$ cat /home/example/.local/state/retry/counter-retry
2
$ ./retry.sh
$ cat /home/example/.local/state/retry/counter-retry
3
$ ./retry.sh
tried enough times
$ ls "${XDG_STATE_HOME:=${HOME}}/retry"
$ rm -rf "${XDG_STATE_HOME:=${HOME}}/retry"
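The counter contract above (print the count and exit 100 while counting; remove the state file and exit 0 once the limit is reached) can be sketched as a small POSIX function. The counter-&lt;name&gt; file name matches the retry example; everything else here is an assumption, not the actual script:

```shell
# Sketch of the counter contract described above (not the actual script).
counter() {
  name="$1"
  max="${2:-}"
  dir="${COUNTER_DIR:-${TMPDIR:-/tmp}}"
  file="${dir}/counter-${name}"
  count="$(cat "$file" 2>/dev/null || echo 0)"
  if [ -n "$max" ] && [ "$count" -ge "$max" ]; then
    rm -f "$file" # limit reached: remove the counter ...
    return 0      # ... and signal "removed"
  fi
  count=$((count + 1))
  printf '%s\n' "$count" >"$file" # persist the new count
  echo "$count"                   # print the current count
  return 100                      # signal "count increased"
}
```

With a limit of 2 this reproduces the transcript above: two calls print 1 and 2 with exit code 100, the third prints nothing and exits 0.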
$ scripts/general/loop.sh 1 0 scripts/general/counter.sh my-counter 5 12345
This script will create a file with the given name; the content will be the RFC 3339 timestamp of the file’s creation, e.g.:
2024-01-16T16:33:12Z
$ scripts/general/create_timestamp_file.sh .timestamp
$ cat .timestamp
2024-02-19T10:37:02Z
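Creating such a file is essentially a one-liner with date -u; this sketch assumes the RFC 3339 UTC format shown above:

```shell
# Sketch: write the current UTC time as an RFC 3339 timestamp,
# e.g. 2024-01-16T16:33:12Z, into the given file.
create_timestamp_file() {
  date -u '+%Y-%m-%dT%H:%M:%SZ' >"$1"
}
```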
This script will fix the directory, file, and script permissions in the given directory ($PWD if not given) and its subdirectories.

The permissions will be fixed in the following way (if -u or -g are not used):

- directories: set to 700
- files: set to 600
- shell scripts (*.sh): set to 700

The following parameter is required:

- -d: the directory ($PWD if not given) in which the permissions will be fixed

The following parameters are optional:

- -g: also fix the group permissions (770/660/770); cannot be used together with -u
- -u: use the current process' umask to fix the permissions; cannot be used together with -g
❗ The permissions of the given directory (

ℹ️ If the given directory (
If you use Husky then the Husky base directory (
$ tree --noreport -p /tmp/example
[drwxrwxrwx] /tmp/example
├── [dr-xr-xr-x] a
│ ├── [dr-xr-xr-x] b
│ └── [----------] t.sh
├── [dr-xr-xr-x] c
├── [----------] s.sh
└── [----------] t
$ scripts/general/fix_permissions.sh -d /tmp/example
$ tree --noreport -p /tmp/example
[drwx------] /tmp/example
├── [drwx------] a
│ ├── [drwx------] b
│ └── [-rwx------] t.sh
├── [drwx------] c
├── [-rwx------] s.sh
└── [-rw-------] t
$ scripts/general/fix_permissions.sh -d /tmp/example -g
$ tree --noreport -p /tmp/example
[drwxrwx---] /tmp/example
├── [drwxrwx---] a
│ ├── [drwxrwx---] b
│ └── [-rwxrwx---] t.sh
├── [drwxrwx---] c
├── [-rwxrwx---] s.sh
└── [-rw-rw----] t
$ scripts/general/fix_permissions.sh -d /tmp/example -u
$ tree --noreport -p /tmp/example
[drwx------] /tmp/example
├── [drwx------] a
│ ├── [drwx------] b
│ └── [-rwx------] t.sh
├── [drwx------] c
├── [-rwx------] s.sh
└── [-rw-------] t
$ umask -S
u=rwx,g=,o=
$ mkdir /tmp/test && cd "$_"
$ git init
$ mkdir d && chmod -x d
$ touch f && chmod u+x f
$ touch s.sh
$ mkdir .githooks
$ touch .githooks/pre-commit && chmod u+x .githooks/pre-commit
$ git config core.hooksPath .githooks
$ mkdir -p node_modules/some-module
$ touch node_modules/some-module/some-script && chmod u+x node_modules/some-module/some-script
$ touch node_modules/some-module/some-script-without-execute-permission.sh
$ cd -
$ tree --noreport -p -a -I .git /tmp/test
[drwx------] /tmp/test
├── [drwx------] .githooks
│ └── [-rwx------] pre-commit
├── [drw-------] d
├── [-rwx------] f
├── [drwx------] node_modules
│ └── [drwx------] some-module
│ ├── [-rwx------] some-script
│ └── [-rw-------] some-script-without-execute-permission.sh
└── [-rw-------] s.sh
$ scripts/general/fix_permissions.sh -d /tmp/test
WARNING: The permissions in the directory '/private/tmp/test' will be fixed.
The following directories will be ignored:
/private/tmp/test/.githooks (1)
/private/tmp/test/node_modules (2)
Do you really want to irreversibly fix the permissions (Y/N)? y
$ tree --noreport -p -a -I .git /tmp/test
[drwx------] /tmp/test
├── [drwx------] .githooks (1)
│ └── [-rwx------] pre-commit
├── [drwx------] d (3)
├── [-rw-------] f (4)
├── [drwx------] node_modules (2)
│ └── [drwx------] some-module
│ ├── [-rwx------] some-script
│ └── [-rwx------] some-script-without-execute-permission.sh
└── [-rwx------] s.sh (5)
1. the Git hooks directory and its files are ignored
2. the node_modules directories and their subdirectories and files are ignored
3. directory permissions have been fixed
4. file permissions have been fixed
5. script permissions have been fixed
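The default 700/600/700 scheme with pruned .git, .githooks, and node_modules directories could be sketched with find; this sketch omits the prompt and the -g/-u variants, and fix_permissions_sketch is a hypothetical helper, not the actual script:

```shell
# Sketch of the default 700/600/700 scheme described above, skipping
# .git, .githooks, and node_modules; prompting and -g/-u are omitted.
fix_permissions_sketch() {
  dir="${1:-$PWD}"
  # directories -> 700
  find "$dir" \( -name .git -o -name .githooks -o -name node_modules \) -prune \
    -o -type d -exec chmod 700 {} +
  # shell scripts -> 700
  find "$dir" \( -name .git -o -name .githooks -o -name node_modules \) -prune \
    -o -type f -name '*.sh' -exec chmod 700 {} +
  # all other files -> 600
  find "$dir" \( -name .git -o -name .githooks -o -name node_modules \) -prune \
    -o -type f ! -name '*.sh' -exec chmod 600 {} +
}
```

-prune stops find from descending into the ignored directories, so their contents keep their permissions, matching callouts 1 and 2 above.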
This script will rename a given file; the new filename will have a hash inserted, e.g.:

test.txt ⇒ test.da39a3e.txt

Use the optional second parameter -e to print the new filename to stdout.
$ scripts/general/hash_filename.sh test.txt
$ scripts/general/hash_filename.sh test-echo.txt -e
test-echo.da39a3e.txt
$ find . \( -type f -name '*.jpg' -o -name '*.png' \) -exec scripts/general/hash_filename.sh {} \;
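The renaming could be sketched as follows. The hash da39a3e in the example above is the truncated SHA-1 of an empty file, so this sketch assumes SHA-1 truncated to 7 characters (via sha1sum from GNU coreutils; macOS would use shasum) and a filename that has an extension; the actual script's algorithm is not shown here:

```shell
# Sketch: insert the first 7 hex digits of the file's SHA-1 hash before the
# extension, e.g. test.txt => test.da39a3e.txt.
# Assumptions: SHA-1, sha1sum available, filename has an extension.
hash_filename() {
  file="$1"
  hash="$(sha1sum "$file" | cut -c1-7)"
  new="${file%.*}.${hash}.${file##*.}"
  mv "$file" "$new"
  if [ "${2:-}" = '-e' ]; then
    printf '%s\n' "$new" # print the new filename
  fi
}
```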
This script will invoke the given script repeatedly with a given delay between invocations and an initial delay.
The loop will finish when the given script has an exit code other than 100.
#!/usr/bin/env sh
if [ ... ]; then
exit 0 # finish loop
fi
#!/usr/bin/env sh
exit 100 # infinite loop
$ scripts/general/loop.sh 10 10 some-script.sh
$ scripts/general/loop.sh 5 0 some-otherscript-with-parameters.sh a 1
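The loop contract can be sketched as a POSIX function; the argument order (delay between invocations first, then initial delay) is an assumption based on the examples above, and loop here is a sketch, not the actual script:

```shell
# Sketch of the loop contract: sleep the initial delay once, then invoke the
# given command repeatedly, sleeping between invocations, until its exit code
# is not 100. Argument order (delay, initial delay) is an assumption.
loop() {
  delay="$1"
  initial_delay="$2"
  shift 2
  sleep "$initial_delay"
  while "$@"; [ $? -eq 100 ]; do
    sleep "$delay"
  done
}
```

This pairs with the counter script above: the loop keeps running while the counter exits 100 and stops once it exits 0.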
$ scripts/general/loop.sh 1 0 scripts/general/counter.sh my-counter 5 12345
This script will invoke shellcheck on *.sh files in the given directory ($PWD if not given) and its subdirectories.

💡 If you copy this script into a Node.js-based project you should exclude the
If you use Husky:
$ scripts/general/shellscript_check.sh
$ scripts/general/shellscript_check.sh /tmp
This script will format the shell script files in the given directory ($PWD if not given) and its subdirectories according to Google Shell Style.
$ scripts/general/shellscript_format.sh
$ scripts/general/shellscript_format.sh /tmp/example
This script will check if the formatting of the shell script files in the given directory ($PWD if not given) and its subdirectories adheres to Google Shell Style.
This script’s exit code is 0 if all shell script files adhere to Google Shell Style or 1 if not.

The following parameter is optional:

- -v: display the paths of the files whose formatting does not adhere to Google Shell Style
$ scripts/general/shellscript_format_check.sh
$ echo $?
0
$ scripts/general/shellscript_format_check.sh /tmp/example
$ echo $?
1
$ scripts/general/shellscript_format_check.sh -v /tmp/example
/tmp/example/example.sh
$ echo $?
1
$ scripts/general/shellscript_format.sh /tmp/example
$ scripts/general/shellscript_format_check.sh /tmp/example
$ echo $?
0
This section contains scripts related to certificates:

Standalone certificates

ℹ️ Standalone certificates are useful if you do not use mutual TLS or if the server’s certificate verifier supports using a trust anchor as both a CA certificate and an end-entity certificate.

- create_self_signed_cert: create a private key and self-signed certificate
- delete_self_signed_cert: delete the private key and self-signed certificate
- renew_self_signed_cert: renew the private key and self-signed certificate
- verify_self_signed_cert: verify the self-signed certificate

Certificate authority based certificates

ℹ️ Certificate authority based certificates are useful if you want to use mutual TLS or if the server’s certificate verifier does not support using a trust anchor as both a CA certificate and an end-entity certificate (e.g. rustls).

- create_ca: create a certificate authority and its root certificate
- copy_ca_root_cert: copy the root certificate of the certificate authority to a given directory
- create_ca_based_cert: create a private key and certificate based on a certificate authority
- delete_ca: delete the certificate authority
- copy_ca_based_cert: copy the private key and certificate based on a certificate authority to a given directory
- delete_ca_based_cert: delete the certificate authority based private key and certificate
- renew_ca_based_cert: renew the certificate authority based certificate
- verify_ca_based_cert: verify the certificate authority based certificate

💡 Standalone certificates are simpler to use than certificate authority based certificates.
This script will copy the private key key.pem and the certificate cert.pem from the certificate authority to the given directory ($PWD if not given).
The given directory will be created if it does not exist yet.
The optional second parameter is the common name (localhost if not given) of the certificate to be copied.

❗ Ensure that the certificate authority has been created and a private key and certificate have been created before executing this script.

If the given directory is inside a Git working tree the script will offer to modify the .gitignore file:

WARNING: key.pem and/or cert.pem is not ignored in '/Users/example/tmp/.gitignore'
Do you want me to modify your .gitignore file (Y/N)?

Related Script: git-cleanup
💡 Copy the script into your Node.js project and add it as a custom script to your package.json:

{
  ...
  "scripts": {
    "cert:copy": "scripts/copy_ca_based_cert.sh certs"
  }
}

$ npm run cert:copy
$ scripts/cert/copy_ca_based_cert.sh
$ scripts/cert/copy_ca_based_cert.sh ~/.local/secrets/certs/localhost
$ scripts/cert/copy_ca_based_cert.sh ~/.local/secrets/certs/https.internal https.internal
$ stat -f '%A %N' ~/.local/secrets/certs/https.internal/*.pem
600 /Users/example/.local/secrets/certs/https.internal/cert.pem
600 /Users/example/.local/secrets/certs/https.internal/key.pem
$ openssl x509 -ext subjectAltName -noout -in ~/.local/secrets/certs/https.internal/cert.pem
X509v3 Subject Alternative Name:
DNS:https.internal
This script will copy the root certificate ca.crt of the certificate authority to the given directory ($PWD if not given).
The given directory will be created if it does not exist yet.

❗ Ensure that the certificate authority has been created.

If the given directory is inside a Git working tree the script will offer to modify the .gitignore file:

WARNING: ca.crt is not ignored in '/Users/example/tmp/.gitignore'
Do you want me to modify your .gitignore file (Y/N)?

Related Script: git-cleanup
$ scripts/cert/copy_ca_root_cert.sh
$ scripts/cert/copy_ca_root_cert.sh ~/.local/secrets/certs/ca-root
$ stat -f '%A %N' ~/.local/secrets/certs/ca-root/ca.crt
600 /Users/example/.local/secrets/certs/ca-root/ca.crt
$ openssl x509 -issuer -noout -in ~/.local/secrets/certs/ca-root/ca.crt
issuer=CN=Easy-RSA CA (2024-08-05, example-host)
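The .gitignore check that both copy scripts perform could be sketched with git check-ignore; warn_if_not_ignored is a hypothetical helper that only warns, whereas the actual scripts also offer to modify the .gitignore file as shown above:

```shell
# Sketch: warn when a file inside a Git working tree is not ignored.
# The actual scripts additionally offer to modify .gitignore; this only warns.
warn_if_not_ignored() {
  dir="$(dirname "$1")"
  name="$(basename "$1")"
  if git -C "$dir" rev-parse --is-inside-work-tree >/dev/null 2>&1 \
    && ! git -C "$dir" check-ignore -q "$name"; then
    echo "WARNING: ${name} is not ignored" >&2
  fi
}
```

Outside a Git working tree the rev-parse guard fails and the function stays silent.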
This script will create a certificate authority and its root certificate ca.crt.

The certificate authority will be created in the following location:

- Linux
  1. $EASYRSA_PKI (if set)
  2. $XDG_DATA_HOME/easyrsa/pki (if $XDG_DATA_HOME is set)
  3. $HOME/.easyrsa/pki
- macOS
  1. $EASYRSA_PKI (if set)
  2. $XDG_DATA_HOME/easyrsa/pki (if $XDG_DATA_HOME is set)
  3. $HOME/Library/Application Support/easyrsa/pki

The directory will be created if it does not exist yet.
The ca.crt root certificate will be created in the directory above.
On CentOS, Debian, Fedora, Mint, Red Hat, and Ubuntu the CA root certificate will be added to the trust store and made available to Linux command-line tools.
On macOS, the CA root certificate will be added to the "System" keychain.
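The lookup order above can be sketched as a small function (Linux branch shown; the macOS branch would only differ in the last fallback path). pki_dir is a hypothetical helper, not the actual script:

```shell
# Sketch of the PKI directory lookup order described above (Linux branch);
# on macOS the last fallback would be "$HOME/Library/Application Support/easyrsa/pki".
pki_dir() {
  if [ -n "${EASYRSA_PKI:-}" ]; then
    printf '%s\n' "$EASYRSA_PKI"
  elif [ -n "${XDG_DATA_HOME:-}" ]; then
    printf '%s\n' "${XDG_DATA_HOME}/easyrsa/pki"
  else
    printf '%s\n' "${HOME}/.easyrsa/pki"
  fi
}
```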
ℹ️ Chrome, Firefox, and Safari need no further configuration; you should restart your browser though. Docker needs to be restarted.

❗ The following environment variables affect the creation of the root certificate:

❗ When the CA root certificate expires, the entire CA, copied root certificates, and all created and copied certificates become invalid. It is not possible to renew the CA root certificate; therefore you need to delete the CA and create a new one. If you have copied the root certificate to other locations you need to copy it again; if you have copied it into a Docker image you need to rebuild that image with the new certificate. You need to create new certificates based on this new CA.

💡 Add the following to your ~/.zshrc:

export EASYRSA_PKI="${HOME}/.local/secrets/easyrsa/pki"

💡 Copy the script into your Node.js project and add it as a custom script to your package.json:

{
  ...
  "scripts": {
    "cert:ca:create": "scripts/create_ca.sh"
  }
}

$ npm run cert:ca:create
$ export EASYRSA_PKI="${HOME}/.local/secrets/easyrsa/pki"
$ scripts/cert/create_ca.sh
Created certificate authority 'Easy-RSA CA (2024-08-05, example-host)'; expires on: 2025-02-01; certificate:
/Users/example/.local/secrets/easyrsa/pki/ca.crt
...
$ date -Idate
2024-08-05
$ stat -f '%A %N' ~/.local/secrets/easyrsa/pki/ca.crt
600 /Users/example/.local/secrets/easyrsa/pki/ca.crt
Check your "System" keychain in Keychain Access; "When using this certificate" should be set to "Always Trust":

- delete_ca
This script will create a private key key.pem and a certificate cert.pem in the given directory ($PWD if not given) based on a certificate authority.
The given directory will be created if it does not exist yet.
The optional second positive integer parameter (range: [1, 24855]) specifies the number of days the generated certificate is valid for; the default is 30 days.
The optional third parameter is the common name (localhost if not given) of the certificate to be added.
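The [1, 24855] range for the validity parameter can be checked with a small guard; validate_days is a hypothetical helper, and the actual script's validation is not shown here:

```shell
# Sketch: validate the optional "days" parameter (positive integer in
# [1, 24855], default 30) as described above.
validate_days() {
  days="${1:-30}"
  case "$days" in
    '' | *[!0-9]*) return 1 ;; # not a non-negative integer
  esac
  [ "$days" -ge 1 ] && [ "$days" -le 24855 ]
}
```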
ℹ️ The certificate created by this script is useful if you want to use mutual TLS or if the server’s certificate verifier does not support using a trust anchor as both a CA certificate and an end-entity certificate (e.g. rustls).

❗ Ensure that the certificate authority has been created before executing this script.

❗ Ensure that the common name (set via the third parameter of this script) of the generated certificate has an entry in /etc/hosts:

WARNING: /etc/hosts does not have an entry for '127.0.0.1 localhost https.internal'

/etc/hosts:
127.0.0.1 localhost
⇓
/etc/hosts:
127.0.0.1 localhost https.internal

If the given directory is inside a Git working tree the script will offer to modify the .gitignore file:

WARNING: key.pem and/or cert.pem is not ignored in '/Users/example/tmp/.gitignore'
Do you want me to modify your .gitignore file (Y/N)?

Related Script: git-cleanup

❗ Certificates with more than 180 days validity will not be accepted by the Apple platform or Safari.
💡 Copy the script (and its related create_ca, delete_ca, copy, delete, renew, and verify scripts) into your Node.js project and add them as custom scripts to your package.json:

{
  ...
  "scripts": {
    "cert:ca:create": "scripts/create_ca.sh",
    "cert:ca:delete": "scripts/delete_ca.sh",
    "cert:copy": "scripts/copy_ca_based_cert.sh certs",
    "cert:create": "scripts/create_ca_based_cert.sh certs",
    "cert:delete": "scripts/delete_ca_based_cert.sh certs",
    "cert:renew": "scripts/renew_ca_based_cert.sh certs",
    "cert:verify": "scripts/verify_ca_based_cert.sh certs"
  }
}

$ npm run cert:ca:create
$ npm run cert:ca:delete
$ npm run cert:create
$ npm run cert:copy
$ npm run cert:delete
$ npm run cert:renew
$ npm run cert:verify
$ scripts/cert/create_ca_based_cert.sh
$ date -Idate
2024-08-05
$ stat -f '%A %N' *.pem
600 cert.pem
600 key.pem
$ openssl x509 -ext subjectAltName -noout -in cert.pem
X509v3 Subject Alternative Name:
DNS:localhost
$ openssl x509 -startdate -noout -in cert.pem
notBefore=Aug 5 14:48:36 2024 GMT
$ openssl x509 -enddate -noout -in cert.pem
notAfter=Sep 4 14:48:36 2024 GMT
$ scripts/cert/create_ca_based_cert.sh certs
$ scripts/cert/create_ca_based_cert.sh . 10
$ scripts/cert/create_ca_based_cert.sh ~/.local/secrets/certs/https.internal 30 https.internal
Restart Firefox to refresh its CA root certificates from the system’s trust store.
$ scripts/cert/create_ca.sh
$ scripts/cert/create_ca_based_cert.sh ~/.local/secrets/certs/localhost
$ docker run --rm httpd:2.4.62-alpine3.20 cat /usr/local/apache2/conf/httpd.conf > httpd.conf.orig
$ sed -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
-e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
-e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
httpd.conf.orig > httpd.conf
$ mkdir -p htdocs
$ printf '<!doctype html><title>Test</title><h1>Test</h1>' > htdocs/index.html
$ docker run -i -t --rm -p 3000:443 \
-v "$PWD/htdocs:/usr/local/apache2/htdocs:ro" \
-v "$PWD/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro" \
-v "$HOME/.local/secrets/certs/localhost/cert.pem:/usr/local/apache2/conf/server.crt:ro" \
-v "$HOME/.local/secrets/certs/localhost/key.pem:/usr/local/apache2/conf/server.key:ro" \
httpd:2.4.62-alpine3.20
$ scripts/cert/create_ca.sh
$ scripts/cert/create_ca_based_cert.sh ~/.local/secrets/certs/localhost
$ printf 'server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /etc/ssl/certs/server.crt;
ssl_certificate_key /etc/ssl/private/server.key;
location / {
root /usr/share/nginx/html;
index index.html;
}
}' > nginx.conf
$ mkdir -p html
$ printf '<!doctype html><title>Test</title><h1>Test</h1>' > html/index.html
$ docker run -i -t --rm -p 3000:443 \
-v "$PWD/html:/usr/share/nginx/html:ro" \
-v "$PWD/nginx.conf:/etc/nginx/conf.d/default.conf:ro" \
-v "$HOME/.local/secrets/certs/localhost/cert.pem:/etc/ssl/certs/server.crt:ro" \
-v "$HOME/.local/secrets/certs/localhost/key.pem:/etc/ssl/private/server.key:ro" \
nginx:1.27.1-alpine3.20-slim
func main() {
const port = 3000
server := http.Server{
Addr: fmt.Sprintf(":%d", port),
ReadTimeout: 5 * time.Second,
WriteTimeout: 5 * time.Second,
IdleTimeout: 5 * time.Second,
Handler: http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
_, err := w.Write([]byte("<!doctype html><title>Test</title><h1>Test</h1>"))
if err != nil {
slog.Error("handle response", slog.Any("error", err))
}
}),
}
defer func(server *http.Server) {
if err := server.Close(); err != nil {
slog.Error("server close", slog.Any("error", err))
os.Exit(70)
}
}(&server)
slog.Info(fmt.Sprintf("Listen local: https://localhost:%d", port))
if err := server.ListenAndServeTLS("cert.pem", "key.pem"); err != nil {
slog.Error("listen", slog.Any("error", err))
os.Exit(70)
}
}
$ cd scripts/cert/go/stdlib
$ ../create_ca.sh
$ ../create_ca_based_cert.sh
$ go run server.go
import { readFileSync } from 'node:fs';

['uncaughtException', 'unhandledRejection'].forEach((s) =>
process.once(s, (e) => {
console.error(e);
process.exit(70);
}),
);
['SIGINT', 'SIGTERM'].forEach((s) => process.once(s, () => process.exit(0)));
let https;
try {
https = await import('node:https');
} catch {
console.error('https support is disabled');
process.exit(78);
}
const port = 3000;
const server = https.createServer(
{
key: readFileSync('key.pem'),
cert: readFileSync('cert.pem'),
},
(_, w) => {
w.writeHead(200).end('<!doctype html><title>Test</title><h1>Test</h1>');
},
);
server.keepAliveTimeout = 5000;
server.requestTimeout = 5000;
server.timeout = 5000;
server.listen(port);
console.log(`Listen local: https://localhost:${port}`);
$ cd scripts/cert/js/nodejs
$ ../create_ca.sh
$ ../create_ca_based_cert.sh
$ node server.mjs
public final class Server {
public static void main(String[] args) throws Exception {
var port = 3000;
var server =
HttpsServer.create(
new InetSocketAddress(port),
0,
"/",
exchange -> {
var response = "<!doctype html><title>Test</title><h1>Test</h1>";
exchange.sendResponseHeaders(HTTP_OK, response.length());
try (var body = exchange.getResponseBody()) {
body.write(response.getBytes());
} catch (IOException e) {
LOGGER.log(SEVERE, "handle response", e);
}
});
server.setHttpsConfigurator(new HttpsConfigurator(newSSLContext()));
server.setExecutor(newVirtualThreadPerTaskExecutor());
server.start();
LOGGER.info(format("Listen local: https://localhost:%d", port));
}
static {
System.setProperty("sun.net.httpserver.maxReqTime", "5");
System.setProperty("sun.net.httpserver.maxRspTime", "5");
System.setProperty("sun.net.httpserver.idleInterval", "5000");
}
private static final Logger LOGGER = getLogger(MethodHandles.lookup().lookupClass().getName());
private static SSLContext newSSLContext() throws Exception {
var keyStorePath = requireNonNull(getenv("KEYSTORE_PATH"), "keystore path");
var keyStorePassword =
requireNonNull(getenv("KEYSTORE_PASS"), "keystore password").toCharArray();
var keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
keyStore.load(newInputStream(Path.of(keyStorePath)), keyStorePassword);
var keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
keyManagerFactory.init(keyStore, keyStorePassword);
var trustManagerFactory =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
trustManagerFactory.init(keyStore);
var sslContext = SSLContext.getInstance("TLS");
sslContext.init(
keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), null);
return sslContext;
}
}
$ cd scripts/cert/java/stdlib
$ ../create_ca.sh
$ ../create_ca_based_cert.sh
$ openssl pkcs12 -export -in cert.pem -inkey key.pem -out certificate.p12 -name localhost -password pass:changeit
$ keytool -importkeystore -srckeystore certificate.p12 -srcstoretype pkcs12 -srcstorepass changeit -destkeystore keystore.jks -deststorepass changeit
$ KEYSTORE_PATH=keystore.jks KEYSTORE_PASS=changeit java Server.java
@SpringBootApplication
public class Server {
@RestController
static class Controller {
@GetMapping("/")
public String index() {
return "<!doctype html><title>Test</title><h1>Test</h1>";
}
}
public static void main(String[] args) {
SpringApplication.run(Server.class, args);
}
}
server.port=3000
server.tomcat.connection-timeout=5s
server.ssl.bundle=https
spring.ssl.bundle.pem.https.reload-on-update=true
spring.ssl.bundle.pem.https.keystore.certificate=cert.pem
spring.ssl.bundle.pem.https.keystore.private-key=key.pem
$ cd scripts/cert/java/spring-boot
$ ../create_ca.sh
$ ../create_ca_based_cert.sh
$ ./gradlew bootRun
ℹ️
|
Instead of using this script, you might want to use Quarkus' own certificate tooling. |
@Path("/")
public class Server {
@GET
@Produces(TEXT_HTML)
@RunOnVirtualThread
public String index() {
return "<!doctype html><title>Test</title><h1>Test</h1>";
}
}
quarkus.http.ssl-port=3000
quarkus.http.idle-timeout=5s
quarkus.http.read-timeout=5s
quarkus.http.ssl.certificate.reload-period=30s
quarkus.http.ssl.certificate.files=cert.pem
quarkus.http.ssl.certificate.key-files=key.pem
$ cd scripts/cert/java/quarkus
$ ../create_ca.sh
$ ../create_ca_based_cert.sh
$ ./gradlew quarkusDev
This script will create a private key key.pem and a self-signed certificate cert.pem in the given directory ($PWD if not given).
The given directory will be created if it does not exist yet.
The optional second positive integer parameter (range: [1, 24855]) specifies the number of days the generated certificate is valid for; the default is 30 days.
The optional third parameter is the common name (localhost if not given) of the certificate to be created.
On macOS, the certificate will also be added to the "login" keychain.
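What the script produces can be approximated with a single openssl invocation (a minimal sketch; the key type, option set, and defaults shown here are assumptions, and the actual script may differ):

```shell
set -eu
cd "$(mktemp -d)"  # demo directory; the script uses the given directory or $PWD
# create key.pem and a self-signed cert.pem for CN/SAN localhost, valid for 30 days
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 30 -nodes -subj '/CN=localhost' \
  -addext 'subjectAltName=DNS:localhost' 2>/dev/null
chmod 600 key.pem cert.pem  # restrict the file permissions, as the script does
openssl x509 -noout -subject -in cert.pem
```

The -addext option is what puts the subject alternative name into the certificate; without it, modern browsers reject the certificate even when the common name matches.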
ℹ️
|
The certificate created by this script is useful if you do not use mutual TLS, if the HTTP client can be configured to ignore self-signed certificates, if the server’s certificate verifier supports using a trust anchor as both a CA certificate and an end-entity certificate, or if you can add the certificate to your trust store.
$ curl --insecure ...
$ wget --no-check-certificate ...
$ http --verify=no ... |
ℹ️
|
Chrome and Safari need no further configuration; you should restart your browser, though. For Firefox, the created certificate has to be accepted manually. Docker needs to be restarted. |
❗
|
Ensure that the common name (set via the third parameter of this script) of the generated certificate has an entry in /etc/hosts; otherwise the script prints a warning:
WARNING: /etc/hosts does not have an entry for '127.0.0.1 localhost https.internal'
/etc/hosts before:
127.0.0.1 localhost
/etc/hosts after:
127.0.0.1 localhost https.internal |
|
If the given directory is inside a Git working tree, the script will offer to modify the .gitignore file:
WARNING: key.pem and/or cert.pem is not ignored in '/Users/example/tmp/.gitignore'
Do you want me to modify your .gitignore file (Y/N)?
Related script: git-cleanup |
|
Certificates with more than 180 days validity will not be accepted by the Apple platform or Safari. |
💡
|
Copy the script (and its related delete, renew, and verify scripts) into your Node.js project and add them as custom scripts to your package.json:
{
...
"scripts": {
"cert:create": "scripts/create_self_signed_cert.sh certs",
"cert:delete": "scripts/delete_self_signed_cert.sh certs",
"cert:renew": "scripts/renew_self_signed_cert.sh certs",
"cert:verify": "scripts/verify_self_signed_cert.sh certs"
}
}
$ npm run cert:create
$ npm run cert:delete
$ npm run cert:renew
$ npm run cert:verify |
$ scripts/cert/create_self_signed_cert.sh
Adding 'localhost' certificate (expires on: 2024-02-29) to keychain /Users/example/Library/Keychains/login.keychain-db ...
$ date -Idate
2024-01-30
$ stat -f '%A %N' *.pem
600 cert.pem
600 key.pem
$ openssl x509 -ext subjectAltName -noout -in cert.pem
X509v3 Subject Alternative Name:
DNS:localhost
$ openssl x509 -startdate -noout -in cert.pem
notBefore=Jan 30 16:25:43 2024 GMT
$ openssl x509 -enddate -noout -in cert.pem
notAfter=Feb 29 16:25:43 2024 GMT
$ scripts/cert/create_self_signed_cert.sh dist/etc/nginx
Adding 'localhost' certificate (expires on: 2024-02-29) to keychain /Users/example/Library/Keychains/login.keychain-db ...
$ scripts/cert/create_self_signed_cert.sh . 10
Adding 'localhost' certificate (expires on: 2024-02-09) to keychain /Users/example/Library/Keychains/login.keychain-db ...
$ scripts/cert/create_self_signed_cert.sh ~/.local/secrets/certs/https.internal 20 https.internal
Adding 'https.internal' certificate (expires on: 2024-02-19) to keychain /Users/example/Library/Keychains/login.keychain-db ...
Check your "login" keychain in Keychain Access; Secure Sockets Layer (SSL) should be set to "Always Trust":
You need to bypass the self-signed certificate warning by clicking on "Advanced" and then "Accept the Risk and Continue":
$ scripts/cert/create_self_signed_cert.sh ~/.local/secrets/certs/localhost
$ docker run --rm httpd:2.4.62-alpine3.20 cat /usr/local/apache2/conf/httpd.conf > httpd.conf.orig
$ sed -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
-e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
-e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
httpd.conf.orig > httpd.conf
$ mkdir -p htdocs
$ printf '<!doctype html><title>Test</title><h1>Test</h1>' > htdocs/index.html
$ docker run -i -t --rm -p 3000:443 \
-v "$PWD/htdocs:/usr/local/apache2/htdocs:ro" \
-v "$PWD/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro" \
-v "$HOME/.local/secrets/certs/localhost/cert.pem:/usr/local/apache2/conf/server.crt:ro" \
-v "$HOME/.local/secrets/certs/localhost/key.pem:/usr/local/apache2/conf/server.key:ro" \
httpd:2.4.62-alpine3.20
$ scripts/cert/create_self_signed_cert.sh ~/.local/secrets/certs/localhost
$ printf 'server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /etc/ssl/certs/server.crt;
ssl_certificate_key /etc/ssl/private/server.key;
location / {
root /usr/share/nginx/html;
index index.html;
}
}' > nginx.conf
$ mkdir -p html
$ printf '<!doctype html><title>Test</title><h1>Test</h1>' > html/index.html
$ docker run -i -t --rm -p 3000:443 \
-v "$PWD/html:/usr/share/nginx/html:ro" \
-v "$PWD/nginx.conf:/etc/nginx/conf.d/default.conf:ro" \
-v "$HOME/.local/secrets/certs/localhost/cert.pem:/etc/ssl/certs/server.crt:ro" \
-v "$HOME/.local/secrets/certs/localhost/key.pem:/etc/ssl/private/server.key:ro" \
nginx:1.27.1-alpine3.20-slim
func main() {
const port = 3000
server := http.Server{
Addr: fmt.Sprintf(":%d", port),
ReadTimeout: 5 * time.Second,
WriteTimeout: 5 * time.Second,
IdleTimeout: 5 * time.Second,
Handler: http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
_, err := w.Write([]byte("<!doctype html><title>Test</title><h1>Test</h1>"))
if err != nil {
slog.Error("handle response", slog.Any("error", err))
}
}),
}
defer func(server *http.Server) {
if err := server.Close(); err != nil {
slog.Error("server close", slog.Any("error", err))
os.Exit(70)
}
}(&server)
slog.Info(fmt.Sprintf("Listen local: https://localhost:%d", port))
if err := server.ListenAndServeTLS("cert.pem", "key.pem"); err != nil {
slog.Error("listen", slog.Any("error", err))
os.Exit(70)
}
}
$ cd scripts/cert/go/stdlib
$ ../create_self_signed_cert.sh
$ go run server.go
['uncaughtException', 'unhandledRejection'].forEach((s) =>
process.once(s, (e) => {
console.error(e);
process.exit(70);
}),
);
['SIGINT', 'SIGTERM'].forEach((s) => process.once(s, () => process.exit(0)));
let https;
try {
https = await import('node:https');
} catch {
console.error('https support is disabled');
process.exit(78);
}
const port = 3000;
const server = https.createServer(
{
key: readFileSync('key.pem'),
cert: readFileSync('cert.pem'),
},
(_, w) => {
w.writeHead(200).end('<!doctype html><title>Test</title><h1>Test</h1>');
},
);
server.keepAliveTimeout = 5000;
server.requestTimeout = 5000;
server.timeout = 5000;
server.listen(port);
console.log(`Listen local: https://localhost:${port}`);
$ cd scripts/cert/js/nodejs
$ ../create_self_signed_cert.sh
$ node server.mjs
public final class Server {
public static void main(String[] args) throws Exception {
var port = 3000;
var server =
HttpsServer.create(
new InetSocketAddress(port),
0,
"/",
exchange -> {
var response = "<!doctype html><title>Test</title><h1>Test</h1>";
exchange.sendResponseHeaders(HTTP_OK, response.length());
try (var body = exchange.getResponseBody()) {
body.write(response.getBytes());
} catch (IOException e) {
LOGGER.log(SEVERE, "handle response", e);
}
});
server.setHttpsConfigurator(new HttpsConfigurator(newSSLContext()));
server.setExecutor(newVirtualThreadPerTaskExecutor());
server.start();
LOGGER.info(format("Listen local: https://localhost:%d", port));
}
static {
System.setProperty("sun.net.httpserver.maxReqTime", "5");
System.setProperty("sun.net.httpserver.maxRspTime", "5");
System.setProperty("sun.net.httpserver.idleInterval", "5000");
}
private static final Logger LOGGER = getLogger(MethodHandles.lookup().lookupClass().getName());
private static SSLContext newSSLContext() throws Exception {
var keyStorePath = requireNonNull(getenv("KEYSTORE_PATH"), "keystore path");
var keyStorePassword =
requireNonNull(getenv("KEYSTORE_PASS"), "keystore password").toCharArray();
var keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
keyStore.load(newInputStream(Path.of(keyStorePath)), keyStorePassword);
var keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
keyManagerFactory.init(keyStore, keyStorePassword);
var trustManagerFactory =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
trustManagerFactory.init(keyStore);
var sslContext = SSLContext.getInstance("TLS");
sslContext.init(
keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), null);
return sslContext;
}
}
$ cd scripts/cert/java/stdlib
$ ../create_self_signed_cert.sh
$ openssl pkcs12 -export -in cert.pem -inkey key.pem -out certificate.p12 -name localhost -password pass:changeit
$ keytool -importkeystore -srckeystore certificate.p12 -srcstoretype pkcs12 -srcstorepass changeit -destkeystore keystore.jks -deststorepass changeit
$ KEYSTORE_PATH=keystore.jks KEYSTORE_PASS=changeit java Server.java
@SpringBootApplication
public class Server {
@RestController
static class Controller {
@GetMapping("/")
public String index() {
return "<!doctype html><title>Test</title><h1>Test</h1>";
}
}
public static void main(String[] args) {
SpringApplication.run(Server.class, args);
}
}
server.port=3000
server.tomcat.connection-timeout=5s
server.ssl.bundle=https
spring.ssl.bundle.pem.https.reload-on-update=true
spring.ssl.bundle.pem.https.keystore.certificate=cert.pem
spring.ssl.bundle.pem.https.keystore.private-key=key.pem
$ cd scripts/cert/java/spring-boot
$ ../create_self_signed_cert.sh
$ ./gradlew bootRun
ℹ️
|
Instead of using this script, you might want to use Quarkus' own certificate tooling. |
@Path("/")
public class Server {
@GET
@Produces(TEXT_HTML)
@RunOnVirtualThread
public String index() {
return "<!doctype html><title>Test</title><h1>Test</h1>";
}
}
quarkus.http.ssl-port=3000
quarkus.http.idle-timeout=5s
quarkus.http.read-timeout=5s
quarkus.http.ssl.certificate.reload-period=30s
quarkus.http.ssl.certificate.files=cert.pem
quarkus.http.ssl.certificate.key-files=key.pem
$ cd scripts/cert/java/quarkus
$ ../create_self_signed_cert.sh
$ ./gradlew quarkusDev
This script will delete the certificate authority.
The certificate authority will be deleted from the following location (resolved in this order):
- Linux
-
$EASYRSA_PKI (if set)
-
$XDG_DATA_HOME/easyrsa/pki (if $XDG_DATA_HOME is set)
-
$HOME/.easyrsa/pki
- macOS
-
$EASYRSA_PKI (if set)
-
$XDG_DATA_HOME/easyrsa/pki (if $XDG_DATA_HOME is set)
-
$HOME/Library/Application Support/easyrsa/pki
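The same resolution order can be sketched in POSIX shell (Linux variant; on macOS the final fallback is $HOME/Library/Application Support/easyrsa/pki — this is an illustrative sketch, not the script's actual code):

```shell
# resolve the Easy-RSA PKI directory:
#   1. $EASYRSA_PKI (if set)
#   2. $XDG_DATA_HOME/easyrsa/pki (if $XDG_DATA_HOME is set)
#   3. $HOME/.easyrsa/pki
pki="${EASYRSA_PKI:-${XDG_DATA_HOME:+${XDG_DATA_HOME}/easyrsa/pki}}"
pki="${pki:-${HOME}/.easyrsa/pki}"
printf '%s\n' "$pki"
```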
ℹ️
|
Chrome, Firefox, and Safari need no further configuration; you should restart your browser, though. Docker needs to be restarted. |
❗
|
On CentOS, Debian, Fedora, Mint, Red Hat, and Ubuntu you need to remove the CA root certificate from your trust store manually. On macOS, you need to remove the CA root certificate from your "System" keychain in Keychain Access manually. |
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:ca:delete": "scripts/delete_ca.sh"
}
}
$ npm run cert:ca:delete |
$ ./delete_ca.sh
WARNING: You are about to delete the CA 'Easy-RSA CA (2024-08-05, example-host)':
/Users/example/.local/secrets/easyrsa/pki/
ca.crt
certs_by_serial/
index.txt
index.txt.attr
inline/
issued/
openssl-easyrsa.cnf
private/
reqs/
revoked/
serial
vars
All existing certificates based on this CA will become invalid.
Do you really want to irreversibly delete the CA (Y/N)? y
Please delete the 'Easy-RSA CA (2024-08-05, example-host)' certificate from your System keychain.
Also, please consult your browser's documentation on how to remove the CA certificate.
This script will remove the private key and the certificate from the certificate authority and delete the key key.pem and the certificate cert.pem from the given directory ($PWD if not given).
If the given directory is not $PWD and is empty after the deletion, it will be deleted as well.
The optional second parameter is the common name (localhost if not given) of the certificate to be removed.
ℹ️
|
Chrome, Docker, Firefox, and Safari need no further configuration. |
❗
|
Ensure that the certificate authority has been created. |
❗
|
Additional copies will not be deleted by this script, i.e. only the private key and certificate in the given directory and the CA will be removed. |
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:delete": "scripts/delete_ca_based_cert.sh certs"
}
}
$ npm run cert:delete |
$ scripts/cert/delete_ca_based_cert.sh
$ scripts/cert/delete_ca_based_cert.sh ~/.local/secrets/certs/localhost
$ scripts/cert/delete_ca_based_cert.sh ~/.local/secrets/certs/https.internal https.internal
This script will delete the private key key.pem and the self-signed certificate cert.pem from the given directory ($PWD if not given).
If the given directory is not $PWD and is empty after the removal, it will be removed as well.
The optional second parameter is the common name (localhost if not given) of the certificate to be removed.
On macOS, the certificate will also be removed from the "login" keychain.
ℹ️
|
Chrome and Safari need no further configuration. For Firefox the old certificate has to be deleted manually. Docker needs to be restarted. |
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:delete": "scripts/delete_self_signed_cert.sh certs"
}
}
$ npm run cert:delete |
$ scripts/cert/delete_self_signed_cert.sh
Removing 'localhost' certificate from keychain /Users/example/Library/Keychains/login.keychain-db ...
$ scripts/cert/delete_self_signed_cert.sh ~/.local/secrets/certs/localhost
Removing 'localhost' certificate from keychain /Users/example/Library/Keychains/login.keychain-db ...
$ scripts/cert/delete_self_signed_cert.sh ~/.local/secrets/certs/https.internal https.internal
Removing 'https.internal' certificate from keychain /Users/example/Library/Keychains/login.keychain-db ...
You need to delete the certificate via Firefox > Preferences > Privacy & Security > Certificates; click "View Certificates…":
Click on the "Servers" tab:
Select the certificate and click "Delete…".
This script will renew the certificate authority based certificate cert.pem in the given directory ($PWD if not given).
The optional second positive integer parameter (range: [1, 24855]) specifies the number of days the generated certificate is valid for; the default is 30 days.
The optional third parameter is the common name (localhost if not given) of the certificate to be renewed.
ℹ️
|
Chrome, Docker, Firefox, and Safari need no further configuration. |
âť—
|
Ensure that the certificate authority has been created and a private key and certificate have been created before executing this script. |
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:renew": "scripts/renew_ca_based_cert.sh certs"
}
}
$ npm run cert:renew |
$ scripts/cert/renew_ca_based_cert.sh
$ scripts/cert/renew_ca_based_cert.sh dist/etc/nginx
$ scripts/cert/renew_ca_based_cert.sh . 30
$ openssl x509 -enddate -noout -in ~/.local/secrets/certs/https.internal/cert.pem
notAfter=Sep 16 11:54:50 2024 GMT
$ scripts/cert/renew_ca_based_cert.sh ~/.local/secrets/certs/https.internal 10 https.internal
$ date -Idate
2024-08-17
$ openssl x509 -startdate -noout -in ~/.local/secrets/certs/https.internal/cert.pem
notBefore=Aug 17 11:55:22 2024 GMT
$ openssl x509 -enddate -noout -in ~/.local/secrets/certs/https.internal/cert.pem
notAfter=Aug 27 11:55:22 2024 GMT
This script will renew the private key key.pem and the self-signed certificate cert.pem in the given directory ($PWD if not given).
The optional second positive integer parameter (range: [1, 24855]) specifies the number of days the certificate is valid for; the default is 30 days.
The optional third parameter is the common name (localhost if not given) of the certificate to be renewed.
On macOS, the certificate will also be renewed in the "login" keychain.
ℹ️
|
Chrome and Safari need no further configuration. For Firefox the old certificate has to be deleted manually and the renewed one has to be added manually. Docker needs to be restarted. |
|
Certificates with more than 180 days validity will not be accepted by the Apple platform or Safari. |
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:renew": "scripts/renew_self_signed_cert.sh certs"
}
}
$ npm run cert:renew |
$ scripts/cert/renew_self_signed_cert.sh
$ scripts/cert/renew_self_signed_cert.sh dist/etc/nginx
$ scripts/cert/renew_self_signed_cert.sh . 30
$ openssl x509 -enddate -noout -in ~/.local/secrets/certs/https.internal/cert.pem
notAfter=Aug 8 11:18:36 2024 GMT
$ scripts/cert/renew_self_signed_cert.sh ~/.local/secrets/certs/https.internal 10 https.internal
Removing 'https.internal' certificate from keychain /Users/example/Library/Keychains/login.keychain-db ...
Adding 'https.internal' certificate (expires on: 2024-09-17) to keychain /Users/example/Library/Keychains/login.keychain-db ...
$ date -Idate
2024-08-07
$ openssl x509 -startdate -noout -in ~/.local/secrets/certs/https.internal/cert.pem
notBefore=Aug 7 16:53:53 2024 GMT
$ openssl x509 -enddate -noout -in ~/.local/secrets/certs/https.internal/cert.pem
notAfter=Aug 17 16:53:53 2024 GMT
This script will verify the certificate authority based certificate cert.pem in the given directory ($PWD if not given).
On macOS, the CA root certificate will also be verified in the "System" keychain.
❗
|
Ensure that the certificate authority has been created and a private key and certificate have been created before executing this script. |
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:verify": "scripts/verify_ca_based_cert.sh certs"
}
}
$ npm run cert:verify |
$ scripts/cert/verify_ca_based_cert.sh
$ scripts/cert/verify_ca_based_cert.sh ~/.local/secrets/certs/localhost
keychain: "/Library/Keychains/System.keychain"
...
"labl"<blob>="Easy-RSA CA (2024-08-05, example-host)"
...
/Users/example/.local/secrets/certs/localhost/cert.pem
Certificate:
Issuer: CN=Easy-RSA CA (2024-08-05, example-host)
Validity
Not Before: Aug 5 14:48:36 2024 GMT
Not After : Sep 4 14:48:36 2024 GMT
Subject: CN=localhost
...
X509v3 Authority Key Identifier:
...
DirName:/CN=Easy-RSA CA (2024-08-05, example-host)
...
X509v3 Subject Alternative Name:
DNS:localhost
...
This script will verify the self-signed certificate cert.pem in the given directory ($PWD if not given).
On macOS, the certificate will also be verified in the "login" keychain.
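The kind of check the script performs can be approximated manually (a sketch; a throwaway self-signed certificate is generated first so the commands are self-contained — the actual script may run different checks):

```shell
set -eu
cd "$(mktemp -d)"
# stand-in for the script's cert.pem: a throwaway self-signed certificate
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 30 -nodes -subj '/CN=localhost' \
  -addext 'subjectAltName=DNS:localhost' 2>/dev/null
# show subject, validity period, and subject alternative name,
# i.e. the fields shown in the example transcript below
openssl x509 -noout -subject -startdate -enddate -in cert.pem
openssl x509 -noout -ext subjectAltName -in cert.pem
```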
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"cert:verify": "scripts/verify_self_signed_cert.sh certs"
}
}
$ npm run cert:verify |
$ scripts/cert/verify_self_signed_cert.sh
$ scripts/cert/verify_self_signed_cert.sh ~/.local/secrets/certs/localhost
keychain: "/Users/example/Library/Keychains/login.keychain-db"
...
"labl"<blob>="localhost"
...
/Users/example/.local/secrets/certs/localhost/cert.pem
Certificate:
...
Issuer: CN=localhost
Validity
Not Before: Feb 28 11:54:32 2024 GMT
Not After : Mar 29 11:54:32 2024 GMT
Subject: CN=localhost
...
X509v3 Subject Alternative Name:
DNS:localhost
...
This section contains scripts related to Docker:
- docker_build
-
build the image
- docker_cleanup
-
remove all project-related containers, images, networks, and volumes
- docker_health
-
query the health status of the container
- docker_inspect
-
display detailed information on the container
- docker_logs
-
display the logs of the container
- docker_remove
-
remove the container and associated unnamed volumes
- docker_start
-
start the image
- docker_sh
-
open a shell into the running container
- docker_stop
-
stop the container
The scripts should be copied into a project, e.g.:
<project root directory>
├── Dockerfile
└── scripts
├── docker_build.sh
├── docker_cleanup.sh
├── ...
And then invoked from the directory containing the Dockerfile:
$ scripts/docker_build.sh
ℹ️
|
All scripts need Docker to be installed. |
❗
|
You should modify the following variables to match your project:
readonly container_name="sdavids-shell-misc-docker-example"
readonly label_group='de.sdavids.docker.group'
readonly namespace='de.sdavids'
readonly repository='sdavids-shell-misc'
The scripts expect the image to be named ${namespace}/${repository} and the container to be named ${container_name}. |
💡
|
You can try the scripts with the example Dockerfile:
$ scripts/docker/docker_build.sh -d scripts/docker/Dockerfile
$ scripts/docker/docker_start.sh
$ scripts/docker/docker_inspect.sh
$ scripts/docker/docker_sh.sh
$ scripts/docker/docker_health.sh
$ scripts/docker/docker_logs.sh
$ scripts/docker/docker_stop.sh
$ scripts/docker/docker_remove.sh
$ scripts/docker/docker_cleanup.sh |
This script will build the ${namespace}/${repository} image, i.e. the project’s image.
The following parameters are supported:
d
-
the path to the Dockerfile to be used ($PWD/Dockerfile if not given)
n
-
do not use the cache when building the image
t
-
one of the two image’s tags (local if not given); the image will always be tagged with latest
This script will add the org.opencontainers.image.revision label to the image; its value is set via:
-
the value of a specific CI environment variable (e.g. GITHUB_SHA or CI_COMMIT_SHA)
-
the hash of the HEAD commit of the checked out branch; the suffix -next will be appended if the working tree is dirty
This script will add the org.opencontainers.image.created label to the image with the creation timestamp of the HEAD commit of the checked out branch as its value; the current time will be used if the working tree is dirty.
Alternatively, you can use the SOURCE_DATE_EPOCH environment variable to pass in the timestamp.
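Both label values can be derived with plain Git. The following sketch shows the logic in a throwaway repository (the real script additionally honors the CI variables and SOURCE_DATE_EPOCH mentioned above):

```shell
set -eu
cd "$(mktemp -d)"
git init -q
GIT_AUTHOR_DATE='2024-05-05T11:05:50Z' GIT_COMMITTER_DATE='2024-05-05T11:05:50Z' \
  git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m init
rev="$(git rev-parse --verify HEAD)"           # hash of the HEAD commit
if [ -n "$(git status --porcelain)" ]; then
  rev="${rev}-next"                            # dirty working tree
fi
created="$(git log -1 --format=%cI)"           # timestamp of the HEAD commit
printf 'org.opencontainers.image.revision=%s\n' "$rev"
printf 'org.opencontainers.image.created=%s\n' "$created"
```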
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_build.sh
$ scripts/docker/docker_build.sh -n
$ scripts/docker/docker_build.sh -d scripts/docker/Dockerfile
...
=> => naming to docker.io/sdavids-shell-misc/sdavids-shell-misc-docker:latest
=> => naming to docker.io/sdavids-shell-misc/sdavids-shell-misc-docker:local
...
$ scripts/docker/docker_build.sh -d scripts/docker/Dockerfile -t example
...
=> => naming to docker.io/sdavids-shell-misc/sdavids-shell-misc-docker:latest
=> => naming to docker.io/sdavids-shell-misc/sdavids-shell-misc-docker:example
...
"org.opencontainers.image.created":"2024-05-05T11:05:50Z"
...
"org.opencontainers.image.revision":"46cca5eff61eabb008ed43e81988e6a9099aa469"
...
$ touch dirty-repo
$ SOURCE_DATE_EPOCH=0 scripts/docker/docker_build.sh -d scripts/docker/Dockerfile -t 1.2.3
...
=> => naming to docker.io/sdavids-shell-misc/sdavids-shell-misc-docker:latest
=> => naming to docker.io/sdavids-shell-misc/sdavids-shell-misc-docker:1.2.3
...
"org.opencontainers.image.created":"1970-01-01T00:00:00Z"
...
"org.opencontainers.image.revision":"46cca5eff61eabb008ed43e81988e6a9099aa469-next"
...
This script removes all containers, images, networks, and volumes with the label ${label_group}=${repository}, i.e. all project-related Docker artifacts.
ℹ️
|
The related scripts will ensure the ${label_group}=${repository} label is set. See the general notes of the Docker section. |
$ scripts/docker/docker_cleanup.sh
This script will query the health status of the running container named ${container_name}, i.e. the project’s container.
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_health.sh
This script will display detailed information on the container named ${container_name}, i.e. the project’s container.
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_inspect.sh
This script will display the logs of the running container named ${container_name}, i.e. the project’s container.
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_logs.sh
This script will remove the ${container_name} container and any unnamed volumes associated with it, i.e. the project’s container and volumes.
The container will be stopped before removal.
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_remove.sh
This script will open a shell into the running container named ${container_name}, i.e. the project’s container.
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_sh.sh
This script will start the ${image_name} image with the tag local, i.e. the project’s locally built image.
The container will be named ${container_name} and labeled with ${label_group}=${repository}.
ℹ️
|
See the general notes of the Docker section. |
❗
|
This script is a starting point; modify it to your project’s needs in conjunction with its Dockerfile. |
💡
|
The provided example Dockerfile will start a simple HTTP server. |
$ scripts/docker/docker_start.sh
This script will stop the ${container_name} container, i.e. the project’s container.
ℹ️
|
See the general notes of the Docker section. |
$ scripts/docker/docker_stop.sh
This section contains scripts related to Git:
- git_author_date_initial
-
displays the initial author dates of the committed files
- git_author_date_last
-
displays the last author dates of the committed files
- git_cleanup
-
remove untracked files from the working tree and optimize a local repository
- git_delete_branches
-
delete all local and remote branches from a given repository
- git_delete_dsstore_files
-
delete all
.DS_Store
files from a given repository - git_get_hash
-
return the hash of the HEAD commit
- git_get_short_hash
-
return the short hash of the HEAD commit
- git_is_working_tree_clean
-
check whether the Git working tree is clean
This script will display the initial author dates of the files of the given Git repository directory ($PWD if not given).
If you use the optional second parameter, then only the author date of the given file path will be displayed.
ℹ️
|
The initial author date is the date the original author added and committed the file to the Git repository. |
💡
|
You can use this script to verify the initial publication year of your copyright statements. |
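Per file, the initial author date is essentially the author date of the commit that added the file. A rough equivalent, demonstrated in a throwaway repository (a sketch; the actual script handles renames and multiple files):

```shell
set -eu
cd "$(mktemp -d)"
git init -q
printf 'hi\n' > a.txt
git add a.txt
GIT_AUTHOR_DATE='2022-04-16T15:59:50+02:00' GIT_COMMITTER_DATE='2022-04-16T15:59:50+02:00' \
  git -c user.email=you@example.com -c user.name=you commit -q -m 'add a.txt'
# author date of the commit that added the file (--diff-filter=A)
git log --diff-filter=A --format=%aI -- a.txt
```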
$ scripts/git/git_author_date_initial.sh /tmp/example
2022-04-16T15:59:50+02:00 a.txt
2022-04-16T15:59:50+02:00 b.txt
2022-04-16T16:00:14+02:00 c/d.txt
2023-04-16T16:00:41+02:00 e.txt
$ scripts/git/git_author_date_initial.sh /tmp/example | cut -c 1-4,26-
2022 a.txt
2022 b.txt
2022 c/d.txt
2023 e.txt
$ tree --noreport -a -I .git /tmp/example
/tmp/example
├── a.txt
├── b.txt
├── c
│ └── d.txt
└── e.txt
$ (cd /tmp/example && git --no-pager log --format=%aI --name-status)
2024-04-16T16:01:19+02:00
M a.txt
2023-04-16T16:00:41+02:00
A e.txt
2022-04-16T16:00:14+02:00
A c/d.txt
2022-04-16T15:59:50+02:00
A a.txt
A b.txt
$ (cd /tmp/example && git --no-pager log --format=%aI --name-status a.txt)
2024-04-16T16:01:19+02:00
M a.txt
2022-04-16T15:59:50+02:00
A a.txt
-
$ scripts/git/git_author_date_initial.sh /tmp/example
2022-04-16T15:59:50+02:00 a.txt
2022-04-16T15:59:50+02:00 b.txt
2022-04-16T16:00:14+02:00 c/d.txt
2023-04-16T16:00:41+02:00 e.txt
$ scripts/git/git_author_date_last.sh /tmp/example
2024-04-16T16:01:19+02:00 a.txt
2022-04-16T15:59:50+02:00 b.txt
2022-04-16T16:00:14+02:00 c/d.txt
2023-04-16T16:00:41+02:00 e.txt
$ scripts/git/git_author_date_initial.sh /tmp/example | cut -c 1-4,26- > initial.txt
$ scripts/git/git_author_date_last.sh /tmp/example | cut -c 1-4,26- > last.txt
$ diff initial.txt last.txt
1c1
< 2022 a.txt
---
> 2024 a.txt
This script will display the last author dates of the files of the given Git repository directory ($PWD if not given).
If you use the optional second parameter, then only the author date of the given file path will be displayed.
ℹ️
|
The last author date is the date of the last Git status change to a committed file of a Git repository. |
💡
|
You can use this script to verify the latest publication year of your copyright statements. |
$ scripts/git/git_author_date_last.sh /tmp/example
2024-04-16T16:01:19+02:00 a.txt
2022-04-16T15:59:50+02:00 b.txt
2022-04-16T16:00:14+02:00 c/d.txt
2023-04-16T16:00:41+02:00 e.txt
$ scripts/git/git_author_date_last.sh /tmp/example | cut -c 1-4,26-
2024 a.txt
2022 b.txt
2022 c/d.txt
2023 e.txt
$ scripts/git/git_author_date_last.sh /tmp/example a.txt
2024-04-16T16:01:19+02:00 a.txt
$ tree --noreport -a -I .git /tmp/example
/tmp/example
├── a.txt
├── b.txt
├── c
│ └── d.txt
└── e.txt
$ (cd /tmp/example && git --no-pager log --format=%aI --name-status)
2024-04-16T16:01:19+02:00
M a.txt
2023-04-16T16:00:41+02:00
A e.txt
2022-04-16T16:00:14+02:00
A c/d.txt
2022-04-16T15:59:50+02:00
A a.txt
A b.txt
$ (cd /tmp/example && git --no-pager log --format=%aI --name-status a.txt)
2024-04-16T16:01:19+02:00
M a.txt
2022-04-16T15:59:50+02:00
A a.txt
-
$ scripts/git/git_author_date_initial.sh /tmp/example
2022-04-16T15:59:50+02:00 a.txt
2022-04-16T15:59:50+02:00 b.txt
2022-04-16T16:00:14+02:00 c/d.txt
2023-04-16T16:00:41+02:00 e.txt
$ scripts/git/git_author_date_last.sh /tmp/example
2024-04-16T16:01:19+02:00 a.txt
2022-04-16T15:59:50+02:00 b.txt
2022-04-16T16:00:14+02:00 c/d.txt
2023-04-16T16:00:41+02:00 e.txt
$ scripts/git/git_author_date_initial.sh /tmp/example | cut -c 1-4,26- > initial.txt
$ scripts/git/git_author_date_last.sh /tmp/example | cut -c 1-4,26- > last.txt
$ diff initial.txt last.txt
1c1
< 2022 a.txt
---
> 2024 a.txt
This script will do the following:
-
remove untracked files from the working tree
-
remove any remote-tracking references that no longer exist on the remote
-
cleanup remote branches
-
cleanup unnecessary files and optimize the local repository
The following parameters are supported:
d
-
the directory (
$PWD
if not given) containing the Git repository to clean e
-
prune entries older than the argument; default:
1.month.ago
n
-
do not actually clean anything; just show what would be done
|
This script will remove all untracked files. Sometimes you have untracked files which you do not want to be cleaned up, for example .env files or *.pem certificates.
Add them to the exclusions to ensure that they will not be removed: scripts/git/git_cleanup.sh
git clean -qfdx \
+ -e .env \
-e .fleet \
-e .idea \
-e .classpath \
-e .project \
-e .settings \
-e .vscode \
+ -e *.pem \
. |
ℹ️
|
By default, the metadata files of Eclipse, JetBrains IDEs, and Visual Studio Code are not removed. |
💡
|
Copy this script into your project:
<project root directory>
└── scripts
├── git_cleanup.sh
├── ... |
$ scripts/git/git_cleanup.sh
$ scripts/git/git_cleanup.sh -d /tmp/example
$ scripts/git/git_cleanup.sh -n
$ scripts/git/git_cleanup.sh -e now
This script will delete all local and remote branches from the Git repository in the given directory ($PWD
if not given).
The optional second parameter is the name of the branch to keep (defaults to the value of the init.defaultBranch
Git configuration parameter or main
if not set).
|
The branches will be irreversibly deleted—be careful, you have been warned! |
$ scripts/git/git_delete_branches.sh
$ scripts/git/git_delete_branches.sh /tmp/example
$ scripts/git/git_delete_branches.sh /tmp/example master
This script will delete all .DS_Store
files from the Git repository in the given directory ($PWD
if not given).
❗
|
This script will remove all .DS_Store files, including tracked ones. You need to commit the changes afterward (if any). |
$ scripts/git/git_delete_dsstore_files.sh
$ tree --noreport -a -I .git /tmp/example
/tmp/example
├── .DS_Store
├── a
│ └── .DS_Store
├── b
│ └── .DS_Store
└── c
└── .DS_Store
$ cd /tmp/example
$ git ls-tree --full-tree -r --name-only HEAD
.DS_Store
a/.DS_Store
$ git diff --staged --name-only
b/.DS_Store
$ cd -
$ scripts/git/git_delete_dsstore_files.sh /tmp/example
The repository at '/private/tmp/example' does not ignore '.DS_Store' files.
You should add '.DS_Store' to your global exclusion file:
git config --global core.excludesfile
And to your project's exclusion file:
/private/tmp/example/.gitignore
---
D .DS_Store
D a/.DS_Store
$ git commit -s -S -m 'chore: removed .DS_Store files'
$ tree --noreport -a -I .git /tmp/example
/tmp/example
├── a
├── b
└── c
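The filesystem half of the task can be sketched like this — a throwaway directory; the actual script additionally removes tracked .DS_Store files from the Git index:

```shell
# throwaway directory tree containing stray .DS_Store files
dir="$(mktemp -d)"
mkdir -p "$dir/a"
touch "$dir/.DS_Store" "$dir/a/.DS_Store" "$dir/a/keep.txt"
# delete every .DS_Store below the directory
find "$dir" -type f -name '.DS_Store' -delete
remaining="$(find "$dir" -type f)"
echo "$remaining"
rm -rf "$dir"
```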
This script will return the hash of the HEAD commit of the checked out branch of the given Git repository directory ($PWD
if not given).
The suffix -dirty
will be appended if the working tree is dirty.
$ scripts/git/git_get_hash.sh
844881d148be35d7c0a9bcbf5ba23ab79cf14c6e
$ touch a
$ scripts/git/git_get_hash.sh
844881d148be35d7c0a9bcbf5ba23ab79cf14c6e-dirty
-
$ scripts/git/git_get_hash.sh
844881d148be35d7c0a9bcbf5ba23ab79cf14c6e
$ scripts/git/git_get_short_hash.sh
844881d
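The essence of the script can be sketched like this — a hedged sketch against a throwaway repository, not the actual implementation:

```shell
# throwaway repository with a single empty commit
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.name=example -c user.email=example@example.org \
  commit -q --allow-empty -m 'init'
hash="$(git -C "$repo" rev-parse HEAD)"          # full 40-character SHA-1
if [ -n "$(git -C "$repo" status --porcelain)" ]; then
  hash="${hash}-dirty"                           # append suffix for a dirty tree
fi
echo "$hash"
rm -rf "$repo"
```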
This script will return the short hash of the HEAD commit of the checked out branch of the given Git repository directory ($PWD
if not given).
The suffix -dirty
will be appended if the working tree is dirty.
The length of the hash can be configured via the optional second parameter (range: [4, 40] for SHA-1 object names or [4, 64] for SHA-256 object names); the default is determined by the core.abbrev
Git configuration variable.
💡
|
To get a consistent hash length across systems you should either
|
$ scripts/git/git_get_short_hash.sh
844881d
$ scripts/git/git_get_short_hash.sh path/to/git/repository
dbd0ffb
$ scripts/git/git_get_short_hash.sh . 10
844881d148
$ git config --local core.abbrev 20
$ scripts/git/git_get_short_hash.sh
844881d148be35d7c0a9
$ touch a
$ scripts/git/git_get_short_hash.sh
844881d-dirty
-
$ scripts/git/git_get_short_hash.sh
844881d
$ scripts/git/git_get_hash.sh
844881d148be35d7c0a9bcbf5ba23ab79cf14c6e
This script will check whether the Git working tree in the given directory ($PWD
if not given) is clean.
$ scripts/git/git_is_working_tree_clean.sh
$ echo $?
- 0
-
the Git working tree of the given directory is clean
- 1
-
the Git working tree of the given directory is dirty
- 2
-
the given directory is not a Git repository
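The three exit codes can be mapped onto git plumbing as follows — a hypothetical sketch (function name invented), demonstrated against throwaway directories:

```shell
# return 0 for a clean tree, 1 for a dirty tree, 2 for a non-repository
working_tree_state() {
  git -C "$1" rev-parse --is-inside-work-tree >/dev/null 2>&1 || return 2
  [ -z "$(git -C "$1" status --porcelain)" ] || return 1
  return 0
}
repo="$(mktemp -d)"
git -C "$repo" init -q
if working_tree_state "$repo"; then clean=0; else clean=$?; fi
touch "$repo/a"                                  # an untracked file -> dirty
if working_tree_state "$repo"; then dirty=0; else dirty=$?; fi
plain="$(mktemp -d)"                             # not a Git repository
if working_tree_state "$plain"; then none=0; else none=$?; fi
echo "$clean $dirty $none"
rm -rf "$repo" "$plain"
```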
This section contains scripts related to GitHub:
- gh_delete_workflow_runs
-
deletes the GitHub Action workflow runs of a given repository
This script will delete the GitHub Action workflow runs of a given repository.
The following parameters are required:
k
-
the number of workflow runs to keep (range: [0, n]); older workflow runs will be deleted first
ℹ️
-k 0
-
will delete all workflow runs.
-k n
-
if
n
is greater than the number of existing workflow runs then no runs will be deleted
r
-
the name of the repository (without the
.git
extension) for which the workflow runs should be deleted; the name is not case-sensitive
The following environment variable is required:
GH_DELETE_WORKFLOW_RUNS_TOKEN_FILE
-
the path to the file containing the GitHub access token having the
actions:rw
permission
ℹ️
Information on how to create a GitHub access token can be found at: GitHub - Creating a fine-grained personal access token
ℹ️
|
The GitHub API only returns 1000 results—so you might have to execute this script several times if you have more than 1000 workflow runs. |
❗
|
This script does not handle concurrent changes to the workflow runs, i.e. this script might fail if someone else adds or deletes workflow runs while this script is running. |
|
The workflow runs will be irreversibly deleted by this script—be careful, you have been warned! |
$ GH_DELETE_WORKFLOW_RUNS_TOKEN_FILE=~/.local/secrets/gh/gh-actions-rw scripts/gh/gh_delete_workflow_runs.sh -r sdavids-shell-misc -k 0
WARNING: The following 2 workflow run(s) will be deleted:
[
{
"display_title": "feat: 2",
"created_at": "2024-08-06T13:24:56Z",
"run_started_at": "2024-08-06T13:24:56Z",
"html_url": "https://github.com/sdavids/sdavids-shell-misc/actions/runs/90267441015"
},
{
"display_title": "feat: 1",
"created_at": "2024-08-06T11:52:16Z",
"run_started_at": "2024-08-06T11:52:16Z",
"html_url": "https://github.com/sdavids/sdavids-shell-misc/actions/runs/90265969659"
}
]
Do you really want to irreversibly delete the 2 workflow run(s) (Y/N)? n
$ GH_DELETE_WORKFLOW_RUNS_TOKEN_FILE=~/.local/secrets/gh/gh-actions-rw scripts/gh/gh_delete_workflow_runs.sh -r sdavids-shell-misc -k 1
WARNING: The following 1 workflow run(s) will be deleted:
[
{
"display_title": "feat: 1",
"created_at": "2024-08-06T11:52:16Z",
"run_started_at": "2024-08-06T11:52:16Z",
"html_url": "https://github.com/sdavids/sdavids-shell-misc/actions/runs/90265969659"
}
]
Do you really want to irreversibly delete the 1 workflow run(s) (Y/N)? y
This section contains scripts related to Gradle:
- check_reproducible_build_gradle
-
checks whether a Gradle build produces reproducible JARs
Related: Gradle Functions
This script will check whether the Gradle build in the given directory ($PWD
if not given) produces reproducible JARs.
In case of a non-reproducible build, the output of this script will show the affected JARs:
--- .checksums/build-1 2024-03-11 03:40:49
+++ .checksums/build-2 2024-03-11 03:40:50
@@ -1,2 +1,2 @@
-62f0ce3946967ff3be58d74b68d40fd438a4cb56d9ec9d3a434b1943db92ca55 ./lib/build/libs/lib-sources.jar
-8cf6cb254443141ca847ec73c6402581e8d37bab59ceefd88926c521812c4390 ./lib/build/libs/lib.jar
+099cebb5a0d6faa8700782877f0c09ef3891bdc861636a81839dd3e7024963f5 ./lib/build/libs/lib-sources.jar
+e2d5ad0d51a030fe23f94b039e3572b54af5a35c4943eaad4e340b91edc3ab2c ./lib/build/libs/lib.jar
ℹ️
|
|
💡
|
Copy the script into your Gradle project: .
├── scripts
│ └── check_reproducible_build_gradle.sh
└── gradlew
$ scripts/check_reproducible_build_gradle.sh |
💡
|
Here are snippets for a reproducible Gradle build: build.gradle.kts
import java.time.Instant
import java.time.OffsetDateTime
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter.ISO_LOCAL_DATE
import java.time.format.DateTimeFormatter.ISO_OFFSET_TIME
import java.time.temporal.ChronoUnit.SECONDS
// https://reproducible-builds.org/docs/source-date-epoch/
val buildTimeAndDate: OffsetDateTime = OffsetDateTime.ofInstant(
(System.getenv("SOURCE_DATE_EPOCH") ?: "").toLongOrNull()?.let {
Instant.ofEpochSecond(it)
} ?: Instant.now().truncatedTo(SECONDS),
ZoneOffset.UTC,
)
tasks.withType<AbstractArchiveTask>().configureEach {
isPreserveFileTimestamps = false
isReproducibleFileOrder = true
filePermissions {
unix("644")
}
dirPermissions {
unix("755")
}
}
tasks.withType<Jar>().configureEach {
manifest {
attributes(
"Build-Date" to ISO_LOCAL_DATE.format(buildTimeAndDate),
"Build-Time" to ISO_OFFSET_TIME.format(buildTimeAndDate),
)
}
}
build.sh
#!/usr/bin/env sh
set -eu
# https://reproducible-builds.org/docs/source-date-epoch/#git
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(git log --max-count=1 --pretty=format:%ct)}"
export SOURCE_DATE_EPOCH
./gradlew \
--configuration-cache \
--no-build-cache \
clean \
build
$ env SOURCE_DATE_EPOCH="$(git log --max-count=1 --pretty=format:%ct)" ./gradlew --configuration-cache --no-build-cache clean build
.github/workflows/ci.yaml
# ...
jobs:
build:
# ...
steps:
# ...
- name: Set SOURCE_DATE_EPOCH
run: |
echo "SOURCE_DATE_EPOCH=$(git log --max-count=1 --pretty=format:%ct)" >> "$GITHUB_ENV"
- name: Run build
run: ./gradlew build |
$ scripts/gradle/check_reproducible_build_gradle.sh
$ scripts/gradle/check_reproducible_build_gradle.sh /tmp/example
This section contains scripts related to Java:
- java_format
-
format the Java sources in the given directory
- java_format_check
-
check the formatting of Java sources in the given directory
- jar_java_versions
-
display the Java and class file versions contained in a JAR
- jar_min_java_version
-
display the minimum Java runtime version for a JAR
Related: Java Functions
This script will format the *.java
files in the given directory ($PWD
if not given) and its subdirectories according to the Google Java Style.
ℹ️
|
|
ℹ️
|
Both |
❗
|
This script needs internet access if it does not find the cached JAR file. It will download and cache the google-java-format JAR. The JAR is cached in the following location (in order of preference):
|
💡
|
If you are using Gradle or Maven you might want to use Spotless instead of this script:
If you are using a JetBrains IDE you might want to use the google-java-format plugin. |
$ scripts/java/java_format.sh
$ scripts/java/java_format.sh /tmp/example/src/main/java
This script will check if the formatting of the *.java
files in the given directory ($PWD
if not given) and its subdirectories adhere to the Google Java Style.
This script’s exit code is 0
if all *.java
files adhere to Google Java Style or 1
if not.
The following parameters are optional:
v
-
display the paths of the files whose formatting does not adhere to Google Java Style
ℹ️
|
|
ℹ️
|
Both |
❗
|
This script needs internet access if it does not find the cached JAR file. It will download and cache the google-java-format JAR. The JAR is cached in the following location (in order of preference):
|
💡
|
If you are using Gradle or Maven you might want to use Spotless instead of this script:
If you are using a JetBrains IDE you might want to use the google-java-format plugin. |
$ scripts/java/java_format_check.sh
$ echo $?
0
$ scripts/java/java_format_check.sh /tmp/example/src/main/java
$ echo $?
1
$ scripts/java/java_format_check.sh -v /tmp/example/src/main/java
/tmp/example/src/main/java/Example.java
$ echo $?
1
$ scripts/java/java_format.sh /tmp/example/src/main/java
$ scripts/java/java_format_check.sh /tmp/example/src/main/java
$ echo $?
0
This script will display the Java and class file versions used by the classes within the given JAR file.
If you use the optional second positive integer parameter (range: [5, n)) only non-matching versions will be displayed and if there is at least one mismatch the exit code will be 100
instead of 0
.
ℹ️
|
|
💡
|
This script is useful to verify that you have not inadvertently forgotten the release option while building your classes if you want to target a specific Java version. |
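Class file versions map to Java versions by a fixed offset of 44 (49 → 5, 52 → 8, 65 → 21). A sketch of reading the major version from a class file's header — the 8-byte file below is a hand-crafted stub, not a real class:

```shell
# minimal fake class file: magic CAFEBABE, minor 0, major 0x34 (= 52 = Java 8)
printf '\312\376\272\276\000\000\000\064' > /tmp/Example.class
# the major version lives in bytes 6-7 (big-endian); Java version = major - 44
major="$(od -An -t u1 -j 6 -N 2 /tmp/Example.class | awk '{print $1 * 256 + $2}')"
echo "Java Version: $((major - 44)); Class File Version: $major"
```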
$ curl -L -O -s https://repo1.maven.org/maven2/org/junit/jupiter/junit-jupiter-api/5.11.0/junit-jupiter-api-5.11.0.jar
$ jar_is_multi_release junit-jupiter-api-5.11.0.jar
0
$ scripts/java/jar_java_versions.sh junit-jupiter-api-5.11.0.jar
Java Version: 8; Class File Version: 52
$ scripts/java/jar_java_versions.sh junit-jupiter-api-5.11.0.jar 8
$ echo $?
0
$ scripts/java/jar_java_versions.sh junit-jupiter-api-5.11.0.jar 11
Java Version: 8; Class File Version: 52
$ echo $?
100
$ curl -L -O -s https://github.com/adoble/adr-j/releases/download/v3.3.1/adr-j.jar
$ jar_is_multi_release adr-j.jar
0
$ scripts/java/jar_java_versions.sh adr-j.jar
Java Version: 5; Class File Version: 49
Java Version: 8; Class File Version: 52
Java Version: 21; Class File Version: 65
$ echo $?
0
$ scripts/java/jar_java_versions.sh adr-j.jar 5
Java Version: 8; Class File Version: 52
Java Version: 21; Class File Version: 65
$ echo $?
100
$ scripts/java/jar_java_versions.sh adr-j.jar 8
Java Version: 5; Class File Version: 49
Java Version: 21; Class File Version: 65
$ echo $?
100
$ scripts/java/jar_java_versions.sh adr-j.jar 21
Java Version: 5; Class File Version: 49
Java Version: 8; Class File Version: 52
$ echo $?
100
$ scripts/java/jar_java_versions.sh adr-j.jar 22
Java Version: 5; Class File Version: 49
Java Version: 8; Class File Version: 52
Java Version: 21; Class File Version: 65
$ echo $?
100
$ curl -L -O -s https://repo1.maven.org/maven2/net/bytebuddy/byte-buddy/1.14.19/byte-buddy-1.14.19.jar
$ jar_is_multi_release byte-buddy-1.14.19.jar
1
$ scripts/java/jar_java_versions.sh byte-buddy-1.14.19.jar
Java Version: 5; Class File Version: 49
Java Version: 6; Class File Version: 50
$ scripts/java/jar_java_versions.sh byte-buddy-1.14.19.jar 5
Java Version: 6; Class File Version: 50
$ echo $?
100
This script will display the minimum Java runtime version necessary for the given JAR file.
The minimum Java runtime version is determined by evaluating the classes contained in the JAR file: the class with the highest Java (class file) version determines the version displayed by this script.
ℹ️
|
|
|
For a multi-release JAR this script will not necessarily report the correct version. |
$ curl -L -O -s https://repo1.maven.org/maven2/org/junit/jupiter/junit-jupiter-api/5.11.0/junit-jupiter-api-5.11.0.jar
$ scripts/java/jar_java_versions.sh junit-jupiter-api-5.11.0.jar
Java Version: 8; Class File Version: 52
$ jar_is_multi_release junit-jupiter-api-5.11.0.jar
0
$ scripts/java/jar_min_java_version.sh junit-jupiter-api-5.11.0.jar
8
$ curl -L -O -s https://github.com/adoble/adr-j/releases/download/v3.3.1/adr-j.jar
$ scripts/java/jar_java_versions.sh adr-j.jar
Java Version: 5; Class File Version: 49
Java Version: 8; Class File Version: 52
Java Version: 21; Class File Version: 65
$ jar_is_multi_release adr-j.jar
0
$ scripts/java/jar_min_java_version.sh adr-j.jar
21
$ curl -L -O -s https://repo1.maven.org/maven2/net/bytebuddy/byte-buddy/1.14.19/byte-buddy-1.14.19.jar
$ scripts/java/jar_java_versions.sh byte-buddy-1.14.19.jar
Java Version: 5; Class File Version: 49
Java Version: 6; Class File Version: 50
$ jar_is_multi_release byte-buddy-1.14.19.jar
1
$ scripts/java/jar_min_java_version.sh byte-buddy-1.14.19.jar
6
This section contains scripts related to Keycloak:
- keycloak_access_token
-
retrieve a Keycloak JWT access token
- keycloak_access_token_decoded
-
retrieve and decode a Keycloak JWT access token
- keycloak_decode_access_token
-
decode a Keycloak JWT access token
This script will retrieve a Keycloak JWT access token for the given user.
❗
|
You should change the realm, scope, and client ID: scripts/keycloak/keycloak_access_token.sh
readonly realm='my-realm'
readonly realm_scope='my-realm-scope'
readonly realm_client_id='my-realm-client'
Depending on your setup, you might have to change the protocol, host, port, or proxy path prefix, e.g. if your Keycloak instance is accessible at http://localhost:9050/keycloak: scripts/keycloak/keycloak_access_token.sh
readonly keycloak_protocol='http'
readonly keycloak_host='localhost'
readonly keycloak_port=9050
readonly keycloak_proxy_path_prefix='/keycloak' |
$ scripts/keycloak/keycloak_access_token.sh my-user
Password:
eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJhSGJ2MFdqT2RsR19wM1BEb0ZvLU1KQ3NuWEk0Ny0xOGdhTjcycndkTnlBIn0.eyJleHAiOjE3MDY0NzI0MTIsImlhdCI6MTcwNjQ3MjExMiwianRpIjoiY2FhZGZhNjUtNWQ5NC00YTk2LWE3YmYtNGI3ODFlY2NjZjlkIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3JlYWxtcy9teS1yZWFsbSIsInN1YiI6ImMxYmYwOTRmLWIzOTctNGYxMy05Y2VhLTUyYTdjYmNlNjRkMCIsInR5cCI6IkJlYXJlciIsImF6cCI6Im15LXJlYWxtLWNsaWVudCIsInNlc3Npb25fc3RhdGUiOiI0NWYyMzE2YS01ZjNiLTRkYzMtYmRiYy0yZmRjYThjODA1NGQiLCJhbGxvd2VkLW9yaWdpbnMiOlsiLyoiXSwic2NvcGUiOiJteS1yZWFsbS1zY29wZSIsInNpZCI6IjQ1ZjIzMTZhLTVmM2ItNGRjMy1iZGJjLTJmZGNhOGM4MDU0ZCJ9.TDGa-i6ipWmxnfFMOehc2j86p3oa5laNlytBc5PFcJeyfgNOYc7SLJZo5OCV7pVyz4VHiv8BKkG2JI56Usg_1fmP-GtFjPojWjf7gQ5FgtncL7RxTKzPtzDQiYRvqS6agHzfd_Q2zP91NVxhU7_-rKnqV3O5Ka8x5qxEaqwvwsT1aZP5KhNDS8haRlOLLSRmTB5Nx2OZSkms6Aok4NGr461xEXu_bxFzbnlLOndG7frbQyY272Oyo6ahtClxbj414tlEsdUMzE8MApPdsWVtW7afMgKBOXyn25RJck7yoHoLgT9pfe9j32aR6syYUaSfSU-ODdCUhxFMZ7lfaFvREA
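For reference, the endpoint such a script talks to follows Keycloak's standard token-endpoint layout (newer Keycloak versions without the /auth prefix). A sketch of assembling it from the configurable pieces — all values hypothetical, and the actual request shown only as a comment:

```shell
# assemble the token endpoint from the configurable pieces (values hypothetical)
keycloak_protocol='http'
keycloak_host='localhost'
keycloak_port=8080
keycloak_proxy_path_prefix=''
realm='my-realm'
token_url="${keycloak_protocol}://${keycloak_host}:${keycloak_port}${keycloak_proxy_path_prefix}/realms/${realm}/protocol/openid-connect/token"
echo "$token_url"
# a password-grant request would then look like (not executed here):
#   curl -s -d grant_type=password -d client_id=my-realm-client \
#     -d username=my-user -d password=... "$token_url" | jq -r .access_token
```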
This script will retrieve a Keycloak JWT access token for the given user and decode it.
ℹ️
|
This script combines keycloak_access_token and keycloak_decode_access_token. |
$ scripts/keycloak/keycloak_access_token_decoded.sh my-user
Password:
eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJhSGJ2MFdqT2RsR19wM1BEb0ZvLU1KQ3NuWEk0Ny0xOGdhTjcycndkTnlBIn0.eyJleHAiOjE3MDY0NzIzNDksImlhdCI6MTcwNjQ3MjA0OSwianRpIjoiNDgyMTAxM2MtYjQ0NC00MjM2LWFkOTUtOWM2MmQyNzc4OGFlIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3JlYWxtcy9teS1yZWFsbSIsInN1YiI6ImMxYmYwOTRmLWIzOTctNGYxMy05Y2VhLTUyYTdjYmNlNjRkMCIsInR5cCI6IkJlYXJlciIsImF6cCI6Im15LXJlYWxtLWNsaWVudCIsInNlc3Npb25fc3RhdGUiOiI0MGM2YjdlZi02MjBlLTQ0MGYtOTQ0Mi05Nzc0MWYyYjhkMjMiLCJhbGxvd2VkLW9yaWdpbnMiOlsiLyoiXSwic2NvcGUiOiJteS1yZWFsbS1zY29wZSIsInNpZCI6IjQwYzZiN2VmLTYyMGUtNDQwZi05NDQyLTk3NzQxZjJiOGQyMyJ9.EOEaOq_HFsQ8_yAPu-zszw2dOM0gS7cUNRhXmKdnGlD1TFVA33rT2cUiXnVVGNGtXXcIbghp3uCSZLUwYrGwDPUnYJbrNycPsPy6iah07oUaakEhsTnYqGmdYgXVw9T7Q2xoGhwtD5_hpgwwvkHCMBbJ8tZBefDXzy1nCS2rzJCgVsZylvfGMPwHO5gAQr5RYrD1o_9TTPLTjDPNtCvYXp1MaVat7fqibiH_ioXFAm2NxIIOrwVGRZH5jW1rdX6gURjoyfYXi9w56SVbzIh4lgZI48rnnxHjRLop8ZuWFcmtx6ykY45MtMFUCE6gNTZFgJmTlYLGQIe9tYmO6Kngow
{
"alg": "RS256",
"typ": "JWT",
"kid": "aHbv0WjOdlG_p3PDoFo-MJCsnXI47-18gaN72rwdNyA"
}
{
"exp": 1706472349,
"iat": 1706472049,
"jti": "4821013c-b444-4236-ad95-9c62d27788ae",
"iss": "http://localhost:8080/realms/my-realm",
"sub": "c1bf094f-b397-4f13-9cea-52a7cbce64d0",
"typ": "Bearer",
"azp": "my-realm-client",
"session_state": "40c6b7ef-620e-440f-9442-97741f2b8d23",
"allowed-origins": [
"/*"
],
"scope": "my-realm-scope",
"sid": "40c6b7ef-620e-440f-9442-97741f2b8d23"
}
This script will decode the given Keycloak JWT access token.
ℹ️
|
|
💡
|
Online JWT Decoder |
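The decoding itself boils down to base64url-decoding the first two dot-separated segments of the token. A minimal sketch, assuming GNU coreutils base64; the token below is the well-known jwt.io sample header with a tiny payload, not a Keycloak token:

```shell
# decode the two JSON segments of a JWT (signature segment is ignored)
token='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIn0.c2ln'
decode_segment() {
  seg="$(printf '%s' "$1" | tr '_-' '/+')"   # base64url -> base64
  case $(( ${#seg} % 4 )) in                 # restore the stripped padding
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 -d
}
header="$(decode_segment "${token%%.*}")"
payload_segment="${token#*.}"
payload_segment="${payload_segment%%.*}"
payload="$(decode_segment "$payload_segment")"
printf '%s\n%s\n' "$header" "$payload"
```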
$ scripts/keycloak/keycloak_decode_access_token.sh eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJhSGJ2MFdqT2RsR19wM1BEb0ZvLU1KQ3NuWEk0Ny0xOGdhTjcycndkTnlBIn0.eyJleHAiOjE3MDY0NzI0MTIsImlhdCI6MTcwNjQ3MjExMiwianRpIjoiY2FhZGZhNjUtNWQ5NC00YTk2LWE3YmYtNGI3ODFlY2NjZjlkIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3JlYWxtcy9teS1yZWFsbSIsInN1YiI6ImMxYmYwOTRmLWIzOTctNGYxMy05Y2VhLTUyYTdjYmNlNjRkMCIsInR5cCI6IkJlYXJlciIsImF6cCI6Im15LXJlYWxtLWNsaWVudCIsInNlc3Npb25fc3RhdGUiOiI0NWYyMzE2YS01ZjNiLTRkYzMtYmRiYy0yZmRjYThjODA1NGQiLCJhbGxvd2VkLW9yaWdpbnMiOlsiLyoiXSwic2NvcGUiOiJteS1yZWFsbS1zY29wZSIsInNpZCI6IjQ1ZjIzMTZhLTVmM2ItNGRjMy1iZGJjLTJmZGNhOGM4MDU0ZCJ9.TDGa-i6ipWmxnfFMOehc2j86p3oa5laNlytBc5PFcJeyfgNOYc7SLJZo5OCV7pVyz4VHiv8BKkG2JI56Usg_1fmP-GtFjPojWjf7gQ5FgtncL7RxTKzPtzDQiYRvqS6agHzfd_Q2zP91NVxhU7_-rKnqV3O5Ka8x5qxEaqwvwsT1aZP5KhNDS8haRlOLLSRmTB5Nx2OZSkms6Aok4NGr461xEXu_bxFzbnlLOndG7frbQyY272Oyo6ahtClxbj414tlEsdUMzE8MApPdsWVtW7afMgKBOXyn25RJck7yoHoLgT9pfe9j32aR6syYUaSfSU-ODdCUhxFMZ7lfaFvREA
{
"alg": "RS256",
"typ": "JWT",
"kid": "aHbv0WjOdlG_p3PDoFo-MJCsnXI47-18gaN72rwdNyA"
}
{
"exp": 1706472412,
"iat": 1706472112,
"jti": "caadfa65-5d94-4a96-a7bf-4b781ecccf9d",
"iss": "http://localhost:8080/realms/my-realm",
"sub": "c1bf094f-b397-4f13-9cea-52a7cbce64d0",
"typ": "Bearer",
"azp": "my-realm-client",
"session_state": "45f2316a-5f3b-4dc3-bdbc-2fdca8c8054d",
"allowed-origins": [
"/*"
],
"scope": "my-realm-scope",
"sid": "45f2316a-5f3b-4dc3-bdbc-2fdca8c8054d"
}
This section contains scripts related to LaTeX:
- latex_pdf_build
-
typeset a given root document into a PDF
This script will typeset a given root document into a PDF.
The following parameter is required:
r
-
the root document file, relative to the source directory (-s)
The following parameters are optional:
f
-
delete the output directory before typesetting
n
-
turn caching off, i.e. the auxiliary files will not be kept
o
-
the output directory (
$PWD/build
if not given) s
-
the source directory (
$PWD/src
if not given) v
-
show diagnostic messages during typesetting
ℹ️
|
Docker needs to be installed. |
ℹ️
|
The PDF is generated using the XeTeX typesetting engine. |
$ scripts/latex/latex_pdf_build.sh -r main.tex
$ tree --noreport /tmp/example
/tmp/example
└── src
└── main.tex
$ scripts/latex/latex_pdf_build.sh -r main.tex -s /tmp/example/src -o /tmp/example/out
$ tree --noreport /tmp/example
/tmp/example
├── out
│ ├── aux (1)
│ │ ├── main.aux
│ │ ├── main.fdb_latexmk
│ │ ├── main.fls
│ │ ├── main.log
│ │ └── main.xdv
│ └── main.pdf (2)
└── src
└── main.tex
$ scripts/latex/latex_pdf_build.sh -r main.tex -s /tmp/example/src -o /tmp/example/out -f -n
$ tree --noreport /tmp/example
/tmp/example
├── out (3)
│ └── main.pdf
└── src
└── main.tex
-
these auxiliary files speed up subsequent PDF generations
-
the typeset PDF
-
the output directory has been cleaned (
-f
) and no auxiliary files have been created (-n
)
$ cd scripts/latex/example
$ ../latex_pdf_build.sh -r main.tex
⇒ build/main.pdf
Typeset PDF: main.pdf
-
$ exiftool build/main.pdf
ExifTool Version Number : 12.76
File Name : main.pdf
Directory : build
File Size : 92 kB
File Modification Date/Time : 2024:09:16 02:06:54+02:00
File Access Date/Time : 2024:09:16 02:06:56+02:00
File Inode Change Date/Time : 2024:09:16 02:06:54+02:00
File Permissions : -rw-r--r--
File Type : PDF
File Type Extension : pdf
MIME Type : application/pdf
PDF Version : 1.5
Linearized : No
Page Count : 3
Creator : XeTeX output 2024.09.16:0006
Producer : xdvipdfmx (20240407)
Create Date : 2024:09:16 00:06:53Z
$ ../../pdf/pdf_remove_metadata.sh build/main.pdf
$ exiftool build/main.pdf
ExifTool Version Number : 12.76
File Name : main.pdf
Directory : build
File Size : 93 kB
File Modification Date/Time : 2024:09:16 02:07:09+02:00
File Access Date/Time : 2024:09:16 02:07:10+02:00
File Inode Change Date/Time : 2024:09:16 02:07:09+02:00
File Permissions : -rw-------
File Type : PDF
File Type Extension : pdf
MIME Type : application/pdf
PDF Version : 1.5
Linearized : Yes
Page Count : 3
This section contains scripts related to Node.js:
- clean_node
-
delete
node_modules
andpackage-lock.json
- dependency_check_node
-
check for dependency updates
- macos_node_modules_fix
-
exclude
node_modules
from Time Machine backups and Spotlight indexing
This script will delete both the node_modules
directory and the package-lock.json
file in the given directory ($PWD
if not given).
This is useful to get a clean slate after dependency updates.
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"clean:node": "scripts/clean_node.sh"
}
}
$ npm run clean:node
$ npm i |
$ scripts/nodejs/clean_node.sh
$ scripts/nodejs/clean_node.sh /tmp/nodejs-example-project
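The clean-slate step itself is a two-path delete. A sketch against a throwaway project layout (names hypothetical):

```shell
# throwaway Node.js project layout
dir="$(mktemp -d)"
mkdir -p "$dir/node_modules/some-pkg"
printf '{}\n' > "$dir/package.json"
printf '{}\n' > "$dir/package-lock.json"
# the clean slate: drop the installed tree and the lockfile, keep package.json
rm -rf "$dir/node_modules" "$dir/package-lock.json"
remaining="$(ls -A "$dir")"
echo "$remaining"
rm -rf "$dir"
```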
This script checks for dependency updates of the Node.js project in the given directory ($PWD
if not given).
The optional second parameter determines whether NPM should ignore pre- or post-scripts (default: true
) while resolving the project’s dependencies.
💡
|
Copy the script into your Node.js project and add it as a custom script to your package.json
{
...
"scripts": {
"dependency:updates": "scripts/dependency_check_node.sh"
}
}
$ npm run dependency:updates |
💡
|
For security reasons you might want to disable the pre- or post-scripts globally via:
$ npm config set ignore-scripts true
Use the following to run the scripts during a single install anyway:
$ npm install --no-ignore-scripts
You can show your current configuration via:
$ npm config ls -l |
$ scripts/dependency_check_node.sh
$ scripts/dependency_check_node.sh /tmp/example
$ scripts/dependency_check_node.sh /tmp/example false
Package Current Wanted Latest Location Depended by Package Type Homepage
esbuild 0.23.0 0.23.0 0.23.1 node_modules/esbuild example devDependencies https://github.com/evanw/esbuild#readme
husky 9.1.4 9.1.4 9.1.5 node_modules/husky example devDependencies https://github.com/typicode/husky#readme
This script will exclude all node_modules
directories in the given directory ($PWD
if not given) and its subdirectories from Time Machine backups and prevent their Spotlight indexing.
💡
|
Copy the script into your Node.js project and add it as a preinstall life cycle script to your package.json
{
...
"scripts": {
"preinstall": "scripts/macos–node-modules-fix.sh"
}
}
For this to work, NPM should not ignore pre- or post-scripts (the default).
For security reasons you might want to disable them globally via:
$ npm config set ignore-scripts true
Use the following to run the scripts during a single install anyway:
$ npm install --no-ignore-scripts
You can show your current configuration via:
$ npm config ls -l
Alternatively, you can use this script as a custom script in your package.json
{
...
"scripts": {
"macos:fix-node-modules": "scripts/macos–node-modules-fix.sh"
}
}
And execute it manually:
$ npm install && npm run macos:fix-node-modules |
$ scripts/nodejs/macos_node_modules_fix.sh
$ scripts/nodejs/macos_node_modules_fix.sh /tmp/example_project
$ xattr -r /tmp/example_workspace
$ tree --noreport -a /tmp/example_workspace
/tmp/example_workspace
├── project1
│ └── node_modules
└── project2
└── node_modules
$ scripts/nodejs/macos_node_modules_fix.sh /tmp/example_workspace
$ xattr -r /tmp/example_workspace (1)
/tmp/example_workspace/project1/node_modules: com.apple.metadata:com_apple_backup_excludeItem
/tmp/example_workspace/project2/node_modules: com.apple.metadata:com_apple_backup_excludeItem
$ tree --noreport -a /tmp/example_workspace
/tmp/example_workspace
├── project1
│ └── node_modules
│ └── .metadata_never_index (2)
└── project2
└── node_modules
└── .metadata_never_index (2)
-
node_modules
directories are not backed up by Time Machine
-
Spotlight indexing prevented
This section contains scripts related to PDFs:
- pdf_remove_metadata
-
removes metadata from a PDF file
This script removes the metadata from the given PDF file.
ℹ️
|
You can use exiftool to display the metadata:
$ exiftool test.pdf
ExifTool Version Number : 12.76
File Name : example.pdf
...
PDF Version : 1.4 |
$ scripts/pdf/pdf_remove_metadata.sh test.pdf
$ find dist -type f -name '*.pdf' -exec scripts/pdf/pdf_remove_metadata.sh {} \;
This section contains scripts related to Web development:
- compress_broli
-
compress a file with brotli
- compress_gzip
-
compress a file with gzip
- compress_zstd
-
compress a file with zstd
- create_build_info_js
-
create a JavaScript build information file
- create_build_info_json
-
create a JSON build information file
- create_build_info_ts
-
create a TypeScript build information file
- minify_css
-
minify CSS files
- minify_gif
-
minify GIF files
- minify_html
-
minify HTML files
- minify_jpeg
-
minify JPEG files
- minify_json
-
minify JSON files
- minify_json_tags
-
minify JSON-structured script tags
- minify_png
-
minify PNG files
- minify_robots
-
minify the robots.txt file
- minify_svg
-
minify SVG files
- minify_traffic_advice
-
minify the private prefetch proxy traffic control file
- minify_webmanifest
-
minify the web application manifest
- minify_xml
-
minify XML files
This script will compress the given file with brotli.
ℹ️
|
|
💡
|
Here is a fragment to be placed into your
|
$ scripts/web/compress_broli.sh test.txt
$ find dist \( -type f -name '*.html' -o -name '*.css' \) -exec scripts/web/compress_broli.sh {} \;
This script will compress the given file with gzip.
💡
|
Here is a fragment to be placed into your
|
$ scripts/web/compress_gzip.sh test.txt
$ find dist \( -type f -name '*.html' -o -name '*.css' \) -exec scripts/web/compress_gzip.sh {} \;
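A sketch of what such a compression step does (assuming GNU gzip 1.6 or newer for the -k option; the actual script's flags may differ):

```shell
# compress a throwaway file at the highest level, keeping the original
f="$(mktemp)"
printf 'hello hello hello hello\n' > "$f"
gzip -9 -k -f "$f"                 # -9 best compression, -k keep, -f overwrite
roundtrip="$(gzip -dc "$f.gz")"    # decompress to verify the round trip
echo "$roundtrip"
rm -f "$f" "$f.gz"
```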
This script will compress the given file with zstd.
ℹ️
|
|
💡
|
Here is a fragment to be placed into your
|
$ scripts/web/compress_zstd.sh test.txt
$ find dist \( -type f -name '*.html' -o -name '*.css' \) -exec scripts/web/compress_zstd.sh {} \;
This script will create a file with the given name containing build information accessible by JavaScript code.
ℹ️
|
The value of
|
ℹ️
|
The value of |
$ scripts/web/create_build_info_js.sh src/build-info.mjs
⇓
export const buildInfo = {
build: {
id: '1710116787',
time: '2024-03-11T00:26:27Z',
},
git: {
branch: 'main',
commit: {
id: '4768a3cf26cecc00a23be6acdf430809e4bb67a7',
time: '2024-03-11T00:25:48Z',
},
},
};
This script will create a JSON file with the given name containing build information.
ℹ️
|
The value of
|
ℹ️
|
The value of |
$ scripts/web/create_build_info_json.sh src/build-info.json
⇓
{"build":{"id":"1710116654","time":"2024-03-11T00:24:14Z"},"git":{"branch":"main","commit":{"id":"b530d501d059e1bbda58d96d78359014effa5584","time":"2024-03-11T00:22:45Z"}}}
This script will create a file with the given name containing build information accessible by TypeScript code.
ℹ️
|
The value of
|
ℹ️
|
The value of |
$ scripts/web/create_build_info_ts.sh src/build-info.ts
⇓
export type BuildInfo = {
// ...
};
export const buildInfo: BuildInfo = {
build: {
id: '1710116078',
time: '2024-03-11T00:14:38Z',
},
git: {
branch: 'main',
commit: {
id: '95189bb08fa918576f10339eb15303d152ade2aa',
time: '2024-03-10T23:52:54Z',
},
},
};
This script will minify and transpile the *.css
files in the given directory ($PWD
if not given) and its subdirectories.
This script uses browserslist to determine the transpilation targets.
ℹ️
|
|
💡
|
If you do not want the defaults you have several options to change them. For example via the following file: .browserslistrc
|
$ scripts/web/minify_css.sh
$ scripts/web/minify_css.sh dist
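For reference, a .browserslistrc overriding the defaults might look like this (the queries below are illustrative, not a recommendation):

```
# hypothetical .browserslistrc example
defaults
not dead
last 2 versions
```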
This script will minify the *.gif
files in the given directory ($PWD
if not given) and its subdirectories.
💡
|
If you are using macOS you might want to use ImageOptim instead of this script. |
💡
|
It is advisable to minimize image files before adding them to a Git repository. Minimizing image files during a build is usually a bad idea unless the build generates image files. Also, you might want to add a hash to the minified image file before adding it to a Git repository. |
$ scripts/web/minify_gif.sh
$ scripts/web/minify_gif.sh dist
This script will minify the *.html
files in the given directory ($PWD
if not given) and its subdirectories.
$ scripts/web/minify_html.sh
$ scripts/web/minify_html.sh dist
This script will minify the *.jpg
and *.jpeg
files in the given directory ($PWD
if not given) and its subdirectories.
💡
|
If you are using macOS you might want to use ImageOptim instead of this script. |
💡
|
It is advisable to minimize image files before adding them to a Git repository. Minimizing image files during a build is usually a bad idea unless the build generates image files. Also, you might want to add a hash to the minified image file before adding it to a Git repository. |
$ scripts/web/minify_jpeg.sh
$ scripts/web/minify_jpeg.sh dist
This script will minify the *.json
files in the given directory ($PWD
if not given) and its subdirectories.
$ scripts/web/minify_json.sh
$ scripts/web/minify_json.sh dist
This script will minify JSON-structured script tags in the given HTML file.
<html>
…
<script type="importmap">
{
"imports": {
"utils": "/j/utils.mjs"
}
}
</script>
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Organization",
"url": "https://sdavids.de/"
}
</script>
…
</html>
⇓
<html>
…
<script type="importmap">{"imports":{"utils":"/j/utils.mjs"}}</script>
<script type="application/ld+json">{"@context":"https://schema.org","@type":"Organization","url":"https://sdavids.de/"}</script>
…
</html>
ℹ️
|
Afterward, you need to install the dependencies of this script: $ npm i --save-dev domutils dom-serializer htmlparser2 |
$ node scripts/web/minify_json_tags.mjs dist/index.html
$ find dist -type f -name '*.html' -exec node scripts/web/minify_json_tags.mjs {} \;
This script will minify the *.png
files in the given directory ($PWD
if not given) and its subdirectories.
💡
|
If you are using macOS you might want to use ImageOptim instead of this script. |
💡
|
It is advisable to minimize image files before adding them to a Git repository. Minimizing image files during a build is usually a bad idea unless the build generates image files. Also, you might want to add a hash to the minified image file before adding it to a Git repository. |
$ scripts/web/minify_png.sh
$ scripts/web/minify_png.sh dist
This script will minify the robots.txt
file in the given directory ($PWD
if not given).
$ scripts/web/minify_robots.sh dist
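Minifying robots.txt mostly means dropping comment and blank lines. A minimal sketch, as a hypothetical helper rather than the repository's script, could look like:

```shell
#!/bin/sh
# Hypothetical sketch: strip full-line comments and blank lines from
# the robots.txt in the given directory ($PWD if not given).
set -eu

minify_robots() {
  dir="${1:-$PWD}"
  f="$dir/robots.txt"
  [ -f "$f" ] || return 0
  tmp=$(mktemp)
  # grep exits 1 when nothing remains; treat an empty result as valid
  grep -v -e '^#' -e '^[[:space:]]*$' "$f" >"$tmp" || true
  mv "$tmp" "$f"
}
```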
This script will minify the *.svg
files in the given directory ($PWD
if not given) and its subdirectories.
💡
|
If you are using macOS you might want to use ImageOptim instead of this script. |
💡
|
It is advisable to minimize image files before adding them to a Git repository. Minimizing image files during a build is usually a bad idea unless the build generates image files. Also, you might want to add a hash to the minified image file before adding it to a Git repository. |
$ scripts/web/minify_svg.sh
$ scripts/web/minify_svg.sh dist
This script will minify the private prefetch proxy traffic control file.
$ scripts/web/minify_traffic_advice.sh dist/.well-known/traffic-advice
This script will minify the given web application manifest file.
$ scripts/web/minify_webmanifest.sh dist/site.webmanifest
This script will minify the *.xml
files in the given directory ($PWD
if not given) and its subdirectories.
$ scripts/web/minify_xml.sh
$ scripts/web/minify_xml.sh dist
The functions need to be copied into an $FPATH directory.
❗
|
The filename needs to match the name of the function. |
💡
|
Example zsh setup: $ mkdir ~/.zfunc
~/.zshrc
readonly ext_func="${HOME}/.zfunc"
export FPATH="${ext_func}:${FPATH}"
for f in ${ext_func}; do
# shellcheck disable=SC2046
autoload -Uz $(ls "${f}")
done |
This section contains generally useful functions:
- color_stderr
-
color errors red
- ls_extensions
-
displays all file extensions
This function will display stderr output in red.
#!/usr/bin/env sh
echo 'error' >&2
$ color_stderr ./with-stderr-output.sh
error
ℹ️
|
GitHub unfortunately does not show the "error" above in red. |
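A portable way to implement such a function is to swap stdout and stderr around a colorizing loop. This is a sketch under the assumption that ANSI escape codes are acceptable; it is not necessarily how the repository's function works:

```shell
#!/bin/sh
# Hypothetical sketch: run a command, pass its stdout through untouched,
# and wrap every stderr line in ANSI red.
color_stderr() {
  # fd 3 temporarily preserves the original stdout; the command's stderr
  # is piped through the loop and written back to stderr, colored
  { "$@" 2>&1 1>&3 | while IFS= read -r line; do
      printf '\033[0;31m%s\033[0m\n' "$line"
    done 1>&2; } 3>&1
}
```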
This function will display all file extensions (case-insensitive) and their count in the given directory ($PWD
if not given) and its subdirectories.
$ ls_extensions
5 sh
$ ls_extensions /tmp/example
3 txt
1 png
$ tree --noreport -a /tmp/example
/tmp/example
├── a.b.txt
├── a.txt
├── b.TXT
└── d
├── .ignored
└── e.png
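One way to implement such a function with POSIX tools is the sketch below; the repository's implementation may differ, for example in how dotfiles are treated:

```shell
#!/bin/sh
# Hypothetical sketch: count file extensions case-insensitively in the
# given directory ($PWD if not given), skipping extension-less dotfiles.
ls_extensions() {
  find "${1:-$PWD}" -type f -name '*.*' ! -name '.*' |
    awk -F/ '{print $NF}' |          # keep the basename only
    awk -F. '{print tolower($NF)}' | # last dot-separated field
    sort | uniq -c | sort -rn
}
```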
This section contains functions related to Git:
- ls_extensions_git
-
display all file extensions for tracked files
This function will display all file extensions (case-insensitive) of tracked files and their count in the given Git directory ($PWD
if not given) and its subdirectories.
💡
|
This function, in conjunction with the ls_extensions function, is helpful in determining whether you have covered your files properly in your .gitattributes file. $ tree --noreport -a -I .git .
.
├── gradle
│ └── wrapper
│ └── gradle-wrapper.jar
└── gradlew.bat
$ ls_extensions
1 jar
1 bat
$ git check-attr -a gradlew.bat (1)
$ git check-attr -a gradle/wrapper/gradle-wrapper.jar
$ printf '*.bat text eol=crlf\n*.jar binary\n' > .gitattributes (2)
$ cat .gitattributes
*.bat text eol=crlf
*.jar binary
$ git check-attr -a gradlew.bat
gradlew.bat: text: set
gradlew.bat: eol: crlf
$ git check-attr -a gradle/wrapper/gradle-wrapper.jar
gradle/wrapper/gradle-wrapper.jar: binary: set
gradle/wrapper/gradle-wrapper.jar: diff: unset
gradle/wrapper/gradle-wrapper.jar: merge: unset
gradle/wrapper/gradle-wrapper.jar: text: unset
$ ls_extensions_git (3)
$ git add gradlew.bat gradle/wrapper/gradle-wrapper.jar (4)
$ ls_extensions_git (5)
1 jar
1 bat |
-
Both
gradlew.bat
and gradle-wrapper.jar
have no attributes set; if we added them to the Git index at this point, they would not be handled properly by Git. -
Add the appropriate attributes for JAR and Windows batch files.
-
Nothing has been added to the Git index yet, so
ls_extensions_git
shows no file extensions. -
Add both files to the Git index.
-
Both file extensions will be reported once they are in the Git index.
$ ls_extensions_git
5 sh
$ ls_extensions_git /tmp/example
3 txt
1 png
$ tree --noreport -a -I .git /tmp/example
/tmp/example
├── a.b.txt
├── a.txt
├── b.TXT
├── d
│ ├── .ignored
│ └── e.png
└── out.txt
$ git ls-files
a.b.txt
a.txt
b.txt
d/.ignored
d/e.png
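The tracked-files variant can be sketched on top of git ls-files; again a hypothetical helper, not the repository's function:

```shell
#!/bin/sh
# Hypothetical sketch: count extensions of Git-tracked files only in
# the given Git directory ($PWD if not given), skipping dotfiles.
ls_extensions_git() {
  git -C "${1:-$PWD}" ls-files |
    awk -F/ '{print $NF}' |                             # basename
    awk -F. 'NF > 1 && $1 != "" {print tolower($NF)}' | # skip dotfiles
    sort | uniq -c | sort -rn
}
```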
This section contains functions related to GitHub CLI:
- repo_new_gh
-
create and checkout a private GitHub repository
- repo_new_local
-
create a new local repository based on a GitHub template repository
- repo_publish_to_gh
-
publish an existing local repository to GitHub
This function will create and checkout a new private GitHub repository from a GitHub template repository with the given name.
❗
|
You should change the template being used: zfunc/repo_new_gh
- readonly template='sdavids/sdavids-project-template'
+ readonly template='my-github-user/my-template' |
ℹ️
|
This script uses Git commit signing; you need to:
Alternatively, you can remove the signing flag from zfunc/repo_new_gh
git commit \
--quiet \
- --gpg-sign \
--signoff \ |
ℹ️
|
Sometimes GitHub is slow or you have bad latency. In that case you need to increase the time used to wait between GitHub interactions: zfunc/repo_new_gh
wait_for_github=10 |
$ repo_new_gh my-new-repo
This function will create a new local Git repository based on a GitHub template repository with the given name.
❗
|
This function needs the GitHub delete_repo scope: $ gh auth refresh -h github.com -s delete_repo |
❗
|
You should change the GitHub user and template being used: zfunc/repo_new_local
- readonly template='sdavids/sdavids-project-template'
+ readonly template='my-github-user/my-template'
- readonly gh_user_id='sdavids'
+ readonly gh_user_id='my-github-user' |
ℹ️
|
This script uses Git commit signing; you need to configure your local git config. Alternatively, you can remove the signing flag from zfunc/repo_new_local
git commit \
--quiet \
- --gpg-sign \
--signoff \ |
ℹ️
|
Sometimes GitHub is slow or you have bad latency. In that case you need to increase the time used to wait between GitHub interactions: zfunc/repo_new_local
wait_for_github=10 |
$ repo_new_local my-new-local-repo
This function will publish the existing local Git repository in the given directory ($PWD
if not given) to GitHub as a private repository.
The optional second parameter is the remote name for the repository (defaults to the directory name if not given).
❗
|
Only the default branch (default: |
ℹ️
|
Sometimes GitHub is slow or you have bad latency. In that case you need to increase the time used to wait between GitHub interactions: zfunc/repo_publish_to_gh
wait_for_github=10 |
$ pwd
/tmp/first
$ repo_publish_to_gh
✓ Created repository sdavids/first on GitHub
https://github.com/sdavids/first
...
$ repo_publish_to_gh /tmp/second
✓ Created repository sdavids/second on GitHub
https://github.com/sdavids/second
...
$ repo_publish_to_gh /tmp/third different
✓ Created repository sdavids/different on GitHub
https://github.com/sdavids/different
...
This section contains functions related to Java:
- jar_is_multi_release
-
display whether a JAR is a multi-release JAR
- jar_manifest
-
display the manifest of a JAR
This function will display whether the given JAR file is a multi-release JAR file (1
) or not (0
).
ℹ️
|
The exit code of this function is the inverse of the displayed value. |
$ curl -L -O -s https://repo1.maven.org/maven2/org/junit/jupiter/junit-jupiter-api/5.11.0/junit-jupiter-api-5.11.0.jar
$ jar_is_multi_release junit-jupiter-api-5.11.0.jar
0
$ echo $?
1
$ curl -L -O -s https://repo1.maven.org/maven2/net/bytebuddy/byte-buddy/1.15.0/byte-buddy-1.15.0.jar
$ jar_is_multi_release byte-buddy-1.15.0.jar
1
$ echo $?
0
$ jar_manifest byte-buddy-1.15.0.jar | grep Multi
Multi-Release: true
This function will display the manifest of the given JAR file.
$ jar_manifest apiguardian-api-1.1.2.jar
Manifest-Version: 1.0
Bnd-LastModified: 1624798392241
Build-Date: 2021-06-27
Build-Revision: aa952a1b9d5b4e9cc0af853e2c140c2455b397be
Build-Time: 14:53:10.089+0200
Built-By: @API Guardian Team
Bundle-Description: @API Guardian
Bundle-DocURL: https://github.com/apiguardian-team/apiguardian
Bundle-ManifestVersion: 2
Bundle-Name: apiguardian-api
Bundle-SymbolicName: org.apiguardian.api
Bundle-Vendor: apiguardian.org
Bundle-Version: 1.1.2
Created-By: 11.0.11 (AdoptOpenJDK)
Export-Package: org.apiguardian.api;version="1.1.2"
Implementation-Title: apiguardian-api
Implementation-Vendor: apiguardian.org
Implementation-Version: 1.1.2
Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.6))"
Specification-Title: apiguardian-api
Specification-Vendor: apiguardian.org
Specification-Version: 1.1.2
Tool: Bnd-5.3.0.202102221516
This section contains functions related to Gradle:
- gradle_new_java_library
-
creates a new Gradle Java library project with sensible, modern defaults
This function will create a new Gradle Java library project with sensible, modern defaults and the given name.
The optional second parameter is the directory ($PWD
if not given) the project is created in.
ℹ️
|
A Git repository will also be initialized for the project. This script uses Git commit signing; you need to:
Alternatively, you can remove the signing flag from zfunc/gradle_new_java_library
git commit \
--quiet \
- --gpg-sign \
--signoff \ |
💡
|
The generated default package will be org.example. You can change the default by adding: printf 'org.gradle.buildinit.source.package=my.org' >> "${GRADLE_USER_HOME:=${HOME}}/gradle.properties" If you want no comments to be generated add: printf 'org.gradle.buildinit.comments=false' >> "${GRADLE_USER_HOME:=${HOME}}/gradle.properties" |
💡
|
You might want to customize the defaults for the created zfunc/gradle_new_java_library
cat << 'EOF' >gradle.properties
...
EOF
cat << 'EOF' >.gitignore
...
EOF
cat << 'EOF' >.gitattributes
...
...
EOF
cat << 'EOF' >.editorconfig
...
EOF |
$ gradle_new_java_library example-java-library
$ gradle_new_java_library other-java-library /tmp
$ tree --noreport -a -I .git .
.
├── .editorconfig
├── .git-blame-ignore-revs
├── .gitattributes
├── .githooks
│ └── pre-commit
├── .gitignore
├── gradle
│ ├── libs.versions.toml
│ └── wrapper
│ ├── gradle-wrapper.jar
│ └── gradle-wrapper.properties
├── gradle.properties
├── gradlew
├── gradlew.bat
├── lib
│ ├── build.gradle.kts
│ └── src
│ ├── main
│ │ ├── java
│ │ │ └── org
│ │ │ └── example
│ │ │ └── Library.java
│ │ └── resources
│ └── test
│ ├── java
│ │ └── org
│ │ └── example
│ │ └── LibraryTest.java
│ └── resources
└── settings.gradle.kts
$ git status
On branch main
nothing to commit, working tree clean
Apache License, Version 2.0 (Apache-2.0.txt or https://www.apache.org/licenses/LICENSE-2.0).
We abide by the Contributor Covenant, Version 2.1 and ask that you do as well.
For more information, please see Code of Conduct.
❗
|
After initializing this repository you need to install the Git hooks via: $ git config core.hooksPath .githooks and configure the ignore-revs file: $ git config blame.ignoreRevsFile .git-blame-ignore-revs |
$ sudo apt-get install curl
Install Docker.
❗
|
Ensure that you install version Version |
|
Unfortunately, Homebrew provides |
$ curl -L https://github.com/OpenVPN/easy-rsa/releases/download/v3.1.7/EasyRSA-3.1.7.tgz -o ~/Downloads/easy-rsa.tgz
$ tar -xzf ~/Downloads/easy-rsa.tgz -C ~/.local/share
$ mv ~/.local/share/EasyRSA-3.1.7 ~/.local/share/easyrsa
$ ln -s ~/.local/share/easyrsa/easyrsa ~/.local/bin/easyrsa
$ rm ~/Downloads/easy-rsa.tgz
There are several different JDKs and multiple options of installing them.
The recommended way is to install via SDKMAN!:
$ sdk install java
First install a JDK.
There are multiple options of installing Gradle.
The recommended way is to install via SDKMAN!:
$ sdk install gradle
if command -v fnm >/dev/null 2>&1; then
eval "$(fnm env --use-on-cd)"
fi
export NVM_DIR="${HOME}/.nvm"
[ -s "${NVM_DIR}/nvm.sh" ] && . "${NVM_DIR}/nvm.sh"
[ -s "${NVM_DIR}/bash_completion" ] && . "${NVM_DIR}/bash_completion"
if command -v nvm >/dev/null 2>&1; then
autoload -U add-zsh-hook
load-nvmrc() {
local nvmrc_path="$(nvm_find_nvmrc)"
if [ -n "${nvmrc_path}" ]; then
local nvmrc_node_version=$(nvm version "$(cat "${nvmrc_path}")")
if [ "${nvmrc_node_version}" = "N/A" ]; then
nvm install
elif [ "${nvmrc_node_version}" != "$(nvm version)" ]; then
nvm use
fi
elif [ -n "$(PWD=$OLDPWD nvm_find_nvmrc)" ] && [ "$(nvm version)" != "$(nvm version default)" ]; then
echo 'Reverting to nvm default version'
nvm use default
fi
}
add-zsh-hook chpwd load-nvmrc
load-nvmrc
fi