Linux Travel Kit (2024)

Last updated on 9/1/2024

Refer to local files here, and to technical Linux info here.

Contents

Hardware

Hardware commands

Use 'lshw -short' for a summary, or plain 'lshw' for full detail. If it is not installed, do 'sudo apt-get install lshw'. Also: 'lscpu', 'lsblk -a', 'lsusb', 'lspci -tvvv'.

Angkor/Angkor2/Angkor3

Use 'uname -m' to discover which architecture Ubuntu is installed for.

Hardware Ubuntu

Hardware recognition is done via lshw. You can see which kernel modules are loaded via lsmod; expect to see e.g. iwlagn, iwlcore and rfkill.

To find out which wireless networks are in range you can do 'iwlist wlan0 scan'. You can configure: 'iwconfig wlan0 channel 6', or 'iwconfig wlan0 essid PwCGuestw ap any'. Then try dhclient.
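A minimal sketch of that sequence with the legacy wireless tools (the interface name wlan0 and the ESSID 'MyNetwork' are assumptions; modern setups often use iw/ip or NetworkManager instead):
sudo ip link set wlan0 up                      # bring the interface up
sudo iwlist wlan0 scan | grep ESSID            # list networks in range
sudo iwconfig wlan0 essid "MyNetwork" ap any   # associate with a network (hypothetical ESSID)
sudo dhclient wlan0                            # request an IP address via DHCP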

BlackBetty

Dell.

BlackTiger (the noisy one)

Clevo Co - Taiwan - NS50MU model.

Finding out

Use Tuxedo control centre (Gnome), Gnome software, Gnome tweaks.

Be aware that the kernel architecture (what the kernel was compiled for) is not the same thing as the CPU architecture (the physical reality).

Command: hostnamectl (shows hostname, OS, kernel and architecture). To display CPU details: 'cat /proc/cpuinfo', or 'lscpu'.

Comments (https://askubuntu.com/questions/650867/what-is-the-difference-between-amd64-and-linux-64-versions): AMD was the first to come up with a 64-bit capable x86 chip, hence at the beginning it was called AMD64. As Intel followed suit and also made its x86 chips 64-bit capable, the architecture's name changed to x86_64 (even though each company has its own name for its implementation of the architecture).

GrayTiger/Bullseye

Basics

Clevo Co - Taiwan - NL51NU model.

Command: hostnamectl. Returns:

To change the hostname: hostnamectl set-hostname new_hostname

To display cpu details: cat /proc/cpuinfo, or lscpu.

Has 8 cores with SMT, i.e. 16 logical processors.

cat /proc/cpuinfo returns 16 times:

Info about the AMD Ryzen 7 5700U. See also local files:

TEE info

Info about the AMD TEE and its driver in the Linux kernel. Regarding the AMD processor's TEE capability, lshw shows:
*-generic
                description: Encryption controller
                product: Family 17h (Models 10h-1fh) Platform Security Processor
                vendor: Advanced Micro Devices, Inc. [AMD]
                physical id: 0.2
                bus info: pci@0000:05:00.2
                version: 00
                width: 32 bits
                clock: 33MHz
                capabilities: bus_master cap_list
                configuration: driver=ccp latency=0
                resources: irq:38 memory:d0300000-d03fffff memory:d04cc000-d04cdfff

Sound

lspci command

lspci | grep Audio
05:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device 1637
05:00.5 Multimedia controller: Advanced Micro Devices, Inc. [AMD] Raven/Raven2/FireFlight/Renoir Audio Processor (rev 01)
05:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) HD Audio Controller
=> 3 devices, 05:00.*

ALSA aplay command

arecord, aplay are the command-line sound recorder and player for ALSA soundcard driver
aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: Generic [HD-Audio Generic], device 7: HDMI 1 [HDMI 1]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Generic_1 [HD-Audio Generic], device 0: ALC293 Analog [ALC293 Analog]
  Subdevices: 0/1
  Subdevice #0: subdevice #0
=> 2 cards, first card has 2 HDMI devices, second card has ALC293 device

/dev/snd devices

sudo fuser -v /dev/snd/*
                     USER        PID ACCESS COMMAND
/dev/snd/controlC0:  marc       1679 F.... pulseaudio
/dev/snd/controlC1:  marc       1679 F.... pulseaudio
/dev/snd/pcmC1D0p:   marc       1679 F...m pulseaudio
=> 3 devices

PulseAudio commands: pacmd - 2 cards available

=> 2 cards

List sound cards: 'pactl list short cards', and to list everything: 'pacmd list-cards'
pacmd list-cards
2 card(s) available.
    index: 0
	name: <alsa_card.pci-0000_05_00.1>
	driver: <module-alsa-card.c>
	owner module: 6
	properties:
		alsa.card = "0"
		alsa.card_name = "HD-Audio Generic"
		alsa.long_card_name = "HD-Audio Generic at 0xd04c8000 irq 93"
		alsa.driver_name = "snd_hda_intel"
		device.bus_path = "pci-0000:05:00.1"
		sysfs.path = "/devices/pci0000:00/0000:00:08.1/0000:05:00.1/sound/card0"
		device.bus = "pci"
		device.vendor.id = "1002"
		device.vendor.name = "Advanced Micro Devices, Inc. [AMD/ATI]"
		device.product.id = "1637"
		device.string = "0"
		device.description = "HD-Audio Generic"
		module-udev-detect.discovered = "1"
		device.icon_name = "audio-card-pci"
	profiles:
		output:hdmi-stereo: Digital Stereo (HDMI) Output (priority 5900, available: no)
		output:hdmi-surround: Digital Surround 5.1 (HDMI) Output (priority 800, available: no)
		output:hdmi-surround71: Digital Surround 7.1 (HDMI) Output (priority 800, available: no)
		output:hdmi-stereo-extra1: Digital Stereo (HDMI 2) Output (priority 5700, available: no)
		output:hdmi-surround-extra1: Digital Surround 5.1 (HDMI 2) Output (priority 600, available: no)
		output:hdmi-surround71-extra1: Digital Surround 7.1 (HDMI 2) Output (priority 600, available: no)
		off: Off (priority 0, available: unknown)
	active profile: <off>
	ports:
		hdmi-output-0: HDMI / DisplayPort (priority 5900, latency offset 0 usec, available: no)
			properties:
				device.icon_name = "video-display"
		hdmi-output-1: HDMI / DisplayPort 2 (priority 5800, latency offset 0 usec, available: no)
			properties:
				device.icon_name = "video-display"
    index: 1
	name: <alsa_card.pci-0000_05_00.6>
	driver: <module-alsa-card.c>
	owner module: 7
	properties:
		alsa.card = "1"
		alsa.card_name = "HD-Audio Generic"
		alsa.long_card_name = "HD-Audio Generic at 0xd04c0000 irq 94"
		alsa.driver_name = "snd_hda_intel"
		device.bus_path = "pci-0000:05:00.6"
		sysfs.path = "/devices/pci0000:00/0000:00:08.1/0000:05:00.6/sound/card1"
		device.bus = "pci"
		device.vendor.id = "1022"
		device.vendor.name = "Advanced Micro Devices, Inc. [AMD]"
		device.product.id = "15e3"
		device.product.name = "Family 17h (Models 10h-1fh) HD Audio Controller"
		device.string = "1"
		device.description = "Family 17h (Models 10h-1fh) HD Audio Controller"
		module-udev-detect.discovered = "1"
		device.icon_name = "audio-card-pci"
	profiles:
		input:analog-stereo: Analog Stereo Input (priority 65, available: unknown)
		output:analog-stereo: Analog Stereo Output (priority 6500, available: unknown)
		output:analog-stereo+input:analog-stereo: Analog Stereo Duplex (priority 6565, available: unknown)
		off: Off (priority 0, available: unknown)
	active profile: <output:analog-stereo+input:analog-stereo>
	sinks:
		alsa_output.pci-0000_05_00.6.analog-stereo/#0: Family 17h (Models 10h-1fh) HD Audio Controller Analog Stereo
	sources:
		alsa_output.pci-0000_05_00.6.analog-stereo.monitor/#0: Monitor of Family 17h (Models 10h-1fh) HD Audio Controller Analog Stereo
		alsa_input.pci-0000_05_00.6.analog-stereo/#1: Family 17h (Models 10h-1fh) HD Audio Controller Analog Stereo
	ports:
		analog-input-internal-mic: Internal Microphone (priority 8900, latency offset 0 usec, available: unknown)
			properties:
				device.icon_name = "audio-input-microphone"
		analog-input-headset-mic: Headset Microphone (priority 8800, latency offset 0 usec, available: no)
			properties:
				device.icon_name = "audio-input-microphone"
		analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
			properties:
				device.icon_name = "audio-speakers"
		analog-output-headphones: Headphones (priority 9900, latency offset 0 usec, available: no)
			properties:
				device.icon_name = "audio-headphones"

alsamixer command

Gives info and allows mixing.

Clevo hardware issues

Ventilator/fan

From linux-service: check if the Tuxedo control center is present and, if so, remove it: 'sudo apt-get remove tuxedo-control-center'. Reboot.

MLS actions:

QEMU

For general info refer to local QEMU info.

Installation on Debian - GrayTiger - packages

General Debian info at https://wiki.debian.org/QEMU suggests three packages:

Installation on Debian - building on GrayTiger

See https://www.qemu.org/docs/master/devel/build-system.html for a description of the two-stage build: 'There is little/no similarities with GNU autotools, so try to forget what you know about them.'

Qemu uses Meson. Refer to basic local Meson info.

There's a description at https://www.qemu.org/docs/master/devel/build-system.html. Testing is described at https://www.qemu.org/docs/master/devel/testing.html: do 'make check'.

Qemu handson on GrayTiger

What have we got

After installation you find, among other things, the qemu-system documentation.

Hands-on 1 qemu commands

Once you manage to start qemu (see below), 'qemu-system-x86_64 -help' lists the supported options. See the list here on GrayTiger.

Hands-on 2 starting qemu

qemu is started with:
qemu-system-x86_64 [machine opts] \
                [cpu opts] \
                [accelerator opts] \
                [device opts] \
                [backend opts] \
                [interface opts] \
                [boot opts]

Run 'qemu-system-x86_64 -version' - returns:
qemu-system-x86_64 -version
QEMU emulator version 5.2.0 (Debian 1:5.2+dfsg-11+deb11u3)
Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers

Run 'qemu-system-x86_64' - runs but complains there's nothing to boot.

Run 'qemu-system-x86_64 linux.img' - runs but complains linux.img does not exist. Where should I put it?

See file:///usr/share/doc/qemu-system-common/system/invocation.html for parameters on starting qemu.

Hands-on 3 make a VDI disk

Hands-on: learn to make a VDI disk. See https://www.baeldung.com/linux/qemu-from-terminal

Sidebar: QCOW2 is QEMU's native disk format.

Hands-on 4 qemu-img to make a VDI disk and load an OS

See https://www.baeldung.com/linux/qemu-from-terminal
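A minimal qemu-img sketch (the file names and the 20G size are arbitrary; qemu-img also supports raw, VDI and other formats):
qemu-img create -f qcow2 myVirtualDisk.qcow2 20G                 # create a 20GB growable disk
qemu-img info myVirtualDisk.qcow2                                # show format, virtual and actual size
qemu-img convert -O vdi myVirtualDisk.qcow2 myVirtualDisk.vdi    # convert to VirtualBox VDI if needed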

Hands-on 5 make a VDI disk and load complete system with KVM OS

Command:
$ qemu-system-x86_64 \
-enable-kvm                                                    \
-m 4G                                                          \
-smp 2                                                         \
-hda myVirtualDisk.qcow2                                       \
-boot d                                                        \
-cdrom linuxmint-21.1-cinnamon-64bit.iso                       \
-netdev user,id=net0,net=192.168.0.0/24,dhcpstart=192.168.0.9  \
-device virtio-net-pci,netdev=net0                             \
-vga qxl                                                       \
-device AC97
The meaning of each option:
    -enable-kvm → KVM to boost performance
    -m 4G → 4GB RAM
    -smp 2 → 2 CPUs
    -hda myVirtualDisk.qcow2 → our 20GB variable-size disk
    -boot d → boots the first virtual CD drive
    -cdrom linuxmint-21.1-cinnamon-64bit.iso → Linux Mint ISO
    -netdev user,id=net0,net=192.168.0.0/24,dhcpstart=192.168.0.9 → NAT with DHCP
    -device virtio-net-pci,netdev=net0 → network card
    -vga qxl → powerful graphics card
    -device AC97 → sound card

Installation on Debian - BlackTiger as part of OPTEE

See OPTEE with Qemu installation.

General installation info

Starting point Qemu Linux wiki.

As per https://www.qemu.org/download/#linux. Understanding what got installed, #1: 'dpkg -L qemu-system' lists all files; see /usr/share/doc/qemu-system ==> but that's just documentation. Understanding what got installed, #2: search for qemu under / and you see it's all over the place: /usr/lib, /usr/bin, ...

QEMU’s system emulation provides a virtual model of a machine (CPU, memory and emulated devices) to run a guest OS. It supports a number of hypervisors (known as accelerators) as well as a JIT known as the Tiny Code Generator (TCG) capable of emulating many CPUs. On Linux it supports KVM and Xen as hypervisors/accelerators, for multiple architectures.

QEMU provides a number of management interfaces including

Running

Running with OPTEE

See OPTEE info.

General info on running

See https://www.qemu.org/docs/master/system/introduction.html#running

What you try to emulate is the 'target', see https://www.qemu.org/docs/master/system/targets.html#system-targets-ref, these include Arm, Mips, PowerPC, OpenRISC, RISC-V, S390x, SPARC, x86, ...

Running SGX or SEV

Under an x86 target, you may include Intel SGX or AMD Secure Encrypted Virtualisation (SEV): Use 'qemu-system-x86_64 -M help' to find out what is supported with that target.

LEGACY

OPTEE deployment: see e.g. OPTEE info.

How to run OP-TEE using QEMU for Armv7-A and Armv8-A:

Fonts

Fonts on Debian

General use of fonts on Debian: debian wiki.

Which fonts are installed and where do they go?

Calibri - Windows proprietary

Problems with Calibri, which is a Windows font: see https://wiki.debian.org/SubstitutingCalibriAndCambriaFonts. For Calibri, use Synaptic to install the Carlito replacement. LaTeX still fails to use it.

See also Libre Office in LTK.

Printers

Printing with CUPS

How to print and manage a printer...

Cups

Finding out about cups: Starting/stopping cups:

Troubleshooting

What's my printer called: 'lpstat -t' indicates Canon_TS_8300_printer_C4, linked to device dnssd://Canon........

Restarting/enabling the printer: the CUPS web interface complains 'Forbidden'. Alternative: Debian 'Settings' - Printer - restart - works fine.

Commands

Cmds:

CUPS

See https://wiki.debian.org/SystemPrinting, and https://wiki.debian.org/CUPSDriverlessPrinting#debian. Debian 11 (bullseye) is geared up to auto-setup network and USB print queues with cups-browsed. Should auto-setup fail, and debugging cups-browsed be an unattractive proposition or there be a more complex situation to resolve, a manual queue setup with lpadmin, the CUPS web interface or system-config-printer is recommended. There is a daemon, cups-browsed, configured in the cups-browsed.conf file, located in the /etc/cups directory. As the config file is present, it seems CUPS is installed. Point your browser to localhost:631. Supply e.g. root as user and the corresponding password.

Useful commands: Use 'avahi-browse -art' for discovery too.
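A few generic commands that tend to be useful here (the queue name is the one found above with lpstat; file.pdf is a placeholder):
lpstat -t                                    # overall CUPS status, default destination, queues
lp -d Canon_TS_8300_printer_C4 file.pdf      # print a file to a named queue
cancel -a Canon_TS_8300_printer_C4           # cancel all jobs on that queue
systemctl status cups cups-browsed           # check the CUPS daemons
avahi-browse -art | grep -i ipp              # discover driverless/IPP printers on the network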

Linux kernel

Kernel intro

See local kernel info. See Wikipedia's kernel info.
  • Lists of what packages are in the Debian kernel:
  • Kernel sources:
  • Debian provides trusted firmware, e.g. for arm, see https://packages.debian.org/bullseye/arm64/arm-trusted-firmware/filelist. This is "secure world" software for ARM SoCs - firmware 2.4+dfsg-2: arm64.
  • Rust in the kernel: https://www.kernel.org/doc/html/latest/rust/index.html

Kernel related utilities/commands

Which kernel is running: 'uname -r', returns e.g.
5.18.0-0.deb11.3-amd64

Which kernel is running: 'uname -a', returns e.g.
Linux GrayTiger 5.18.0-0.deb11.3-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.18.14-1~bpo11+1 (2022-07-28) x86_64 GNU/Linux

sysctl ('sudo sysctl -a'): a utility that reads and modifies the attributes of the system kernel such as its version number, maximum limits, and security settings.

You can see which kernel modules are loaded via sudo lsmod. You can see e.g. iwlagn, iwlcore and rfkill. See https://en.wikipedia.org/wiki/Modprobe.

The 'dmesg' command displays kernel boot time messages.

Also 'cat /proc/version' lists your kernel.

Also 'dmesg | grep Linux' (?)

Updating Bullseye's kernel

"Hi, this new AMD notebook needs a newer kernel. Open a terminal." - guy. Remark: http://95.211.190.99/ corresponds to the ubuntushop website. debkernel.sh contains the following:
#!/bin/bash
echo "deb http://deb.debian.org/debian bullseye-backports main contrib non-free" | tee /etc/apt/sources.list.d/bullseye-backports.list
#deb-src http://deb.debian.org/debian bullseye-backports main contrib non-free
apt-get update
apt-get -y install -t bullseye-backports linux-image-amd64
apt-get -y install -t bullseye-backports firmware-linux firmware-linux-nonfree firmware-iwlwifi
apt-get -y install linux-headers-$(uname -r)
rm  debkernel.sh
Execution yields: 'possibly missing firmware ....series...'

You find AMD firmware in /usr/lib/firmware

Loadable kernel modules

Loadable kernel modules in Linux are loaded (and unloaded) by the modprobe command. They are located in /lib/modules or /usr/lib/modules and have had the extension .ko ("kernel object") since version 2.6 (previous versions used the .o extension). The lsmod command lists the loaded kernel modules.

In emergency cases, when the system fails to boot due to e.g. broken modules, specific modules can be enabled or disabled by modifying the kernel boot parameters list (for example, if using GRUB, by pressing 'e' in the GRUB start menu, then editing the kernel parameter line).

See also https://en.wikipedia.org/wiki/Loadable_kernel_module
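A short sketch of typical module handling (the module name i2c_dev is just an example):
lsmod | head                       # list loaded modules
modinfo i2c_dev                    # show description, parameters and file of a module
sudo modprobe i2c_dev              # load a module (plus its dependencies)
sudo modprobe -r i2c_dev           # unload it again
ls /lib/modules/$(uname -r)/       # modules shipped for the running kernel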

Kernel creation

Kernel creation

See https://www.kernel.org/doc/html/latest/admin-guide/quickly-build-trimmed-linux.html

Kernel creation - legacy

In a nutshell: you need to have the source rpms installed to have the kernel source tree in /usr/src/linux, you need to run the make xconfig and other makes, and you need to reconfigure LILO.
 

Redhat: Getting the sourcetree in place. Start from InfoMagicGreen9612, CD 1, directory /SRPMS, which contains a file called kernel-2.0.18-5.src.rpm.

Unfortunately, glint refuses to read it, while a manual browse shows all the rpms. Have a look in the Kernel-HOWTO (which, however, assumes you ftp the kernel in tar format over the Internet).

So let's go for a manual install. If you peek in /usr/src/redhat and /linux, you find that the sources are apparently expected here. So let's try 'rpm -i /mnt/SRPMS/kernel-2.0.18-5.src.rpm'.

No message comes back whatsoever. Let's do 'rpm -qa | less': this only shows kernel 2.0.18-5, which is the executable format. Glint does not show me any source, and running 'rpm -V kernel-2.0.18-5.src.rpm' says it's not installed.

So what? Well:

Lesson learnt: glint does not show you this tar.gz file anywhere, you have to manually work your way through the rpm -ivv / gunzip / tar command...

Running make according to the RedHat 4.0 Manual. Position yourself at /usr/src/linux and go. 'make mrproper' results in error ARCH2; 'make config' results in the familiar question and answer game... New kernel will be written to .................
 

Making your new kernel bootable via LILO: edit /etc/lilo.conf, provide a label and a pointer to your new kernel. Run lilo.

Linux kernel consoles

Refer also to Linux console - Wikipedia.

The Linux console provides a way for the kernel and other processes to output text-based messages to the user, and to receive text-based input from the user. In Linux, several devices can be used as system console: a virtual terminal, serial port, USB serial port, VGA in text-mode, and framebuffer.

During kernel boot, the console displays the kernel boot log. At this point in time, the kernel is the only software running, and logging via user-space (e.g. syslog) is not possible. Once the kernel has finished booting, it runs the init process (also sending output to the console), which handles booting of the rest of the system including starting any background daemons.

After the init boot process is complete, the console will be used to multiplex multiple virtual terminals (accessible by pressing Ctrl-Alt-F1, Ctrl-Alt-F2 etc., Ctrl-Alt-LeftArrow, Ctrl-Alt-RightArrow, or using chvt). On each virtual terminal, a getty process is run, which in turn runs /bin/login to authenticate a user. After authentication, a command shell will be run. Virtual terminals, like the console, are supported at the Linux kernel level.

Linux audio kernel

Refer e.g. to Linux audio intro - Ted Felix.

Linux audio on GrayTiger

As per Ted Felix, running uname -a reveals:
uname -a
Linux GrayTiger 5.18.0-0.deb11.3-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.18.14-1~bpo11+1 (2022-07-28) x86_64 GNU/Linux
So the 'PREEMPT_DYNAMIC' should indicate this is a low-latency kernel.

Audio software needs to run at a higher priority and with memory locked so that it doesn't swap out to the hard disk. To give a user that power, we need an "audio" group, give that group some special privileges, then add the user to that group. Check if GrayTiger has an audio group: grep audio /etc/group
grep audio /etc/group
audio:x:29:pulse,marc
The limits for the audio group can usually be found in /etc/security/limits.d/audio.conf. Indeed, this is present on GrayTiger.

Program execution and the PATH variable

A program can be started from a running shell.
  • If the program is a Linux command, utility or a program which is on the PATH, simply invoke it
  • If the program resides in the directory where you are, invoke it './programname'
  • If the program resides somewhere in a specific directory (and you're not there) which is not on the PATH, either:
    • invoke it by specifying the full path and the program's name
    • append/prepend the program's path to the PATH variable, ref below
See further below, and also https://unix.stackexchange.com/questions/26047/how-to-correctly-add-a-path-to-path
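To check how the shell resolves a name against PATH (myprog is a hypothetical program name):
echo "$PATH"          # show the current search path
type ls               # reports an alias, builtin, or e.g. /usr/bin/ls
command -v myprog     # prints the full path if myprog is found on PATH
./myprog              # run a program from the current directory explicitly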

bash and environment variables

Basics

For login and scripting, Debian relies on bash. Bash is a sh-compatible command interpreter. Info via 'man bash' and in /usr/share/doc/packages/bash. When bash is invoked as sh, it tries to mimic the behaviour of older versions of sh, and it will not consider any tailoring from start-up files.

Refer also to local bash and shells info.

Personalisation

GrayTiger has the following:
  • .profile - # set PATH so it includes user's private bin if it exists
  • .bashrc - # ~/.bashrc: executed by bash for non-login shells.
  • .bash_history - history of your bash commands
  • .bash_logout - # ~/.bash_logout: executed by bash when login shell exits => clears the screen
Legacy: personalisation:
  • standard way - for all users: via /etc/profile
  • standard way - for individual user such as root: via ~/.bash_profile (and further via ~/.bash_login and ~/.profile)
  • SuSE way - for individual user: via /etc/profile.local
  • CONCLUSION: for the time being I'll personalise via /root/.bash_profile

Environment variables (echo, env, export cmds)

Programs run by the shell may define variables and export them or not.

Exported variables such as $HOME and $PATH are available to (inherited by) other programs run by the shell that exports them (and the programs run by those other programs, and so on) as environment variables.

Non-exported (regular) variables are not available to other programs.

Reading environment variables

You can read the value of the environment variables by using 'env'. If you want to see the value of one particular variable, do e.g. 'echo $CLASSPATH'. If nothing comes back, the variable is not set.

How to set environment variables

See e.g. https://www.baeldung.com/linux/bash-variables-export. Approaches for setting environment variables:
  • Naive approach (destroying current setting): you can set e.g. the CLASSPATH variable using: export CLASSPATH=/path/to/your/classes, or you can use -cp option to run your class: java -cp /path/to/your/classes foo.bar.MainClass. In case the variable had any value already, that value is gone. If you set PATH like this, all your usual commands etc are gone...
  • Appending/prepending is better
  • Appending
    • Temporary solution: appending by executing an export command in your shell: 'export PATH=$PATH:~/new/privatepath'
    • Persistent solution: modify PATH in ~/.profile, or in ~/.bash_profile - see below
  • Prepending
    • Temporary solution: prepending by executing an export command in your shell: 'export PATH=~/new/privatepath:$PATH'
    • Persistent solution: modify PATH in ~/.profile, or in ~/.bash_profile - see below
You don't need export if the variable is already in the environment: any change of the value of the variable is reflected in the environment. PATH is pretty much always in the environment; all unix systems set it very early on.

GrayTiger's /home/marc/.profile file

Contains
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
Approach taken for OPTEE: insert 'PATH=$PATH:~/new/privatepath' in /home/marc/.profile file.

Piping and passing on command output

Passing output of a command to a file: $ ls > lsout.txt .

Listing all the files and directories and piping the output into the more command: $ ls -l | more
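A few more redirection patterns in the same spirit:
ls -l > lsout.txt 2>&1     # capture stdout and stderr in one file
ls -l | tee lsout.txt      # show the output on screen and save it at the same time
ls -l >> lsout.txt         # append rather than overwrite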

Tracing the execution of a (bash) script or program

Bash doesn’t provide any built-in debugger. However, there are commands and constructs that are helpful, including the set and trap commands.

Refer also to Baeldung on bash debugging.

Minimalistic approach: bash -v invokes verbosity during execution.
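A minimal sketch of these options and the trap construct (myscript.sh and some_command are hypothetical):
bash -v myscript.sh    # print each script line as it is read
bash -x myscript.sh    # print each command, after expansion, as it is executed

# inside a script: trace only a region, and report failing lines via trap
set -x                 # start tracing
some_command
set +x                 # stop tracing
trap 'echo "error at line $LINENO" >&2' ERR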

Refer also to Linux basics.

Linux commands

See https://en.wikipedia.org/wiki/List_of_Unix_commands

Use 'apropos <keyword>'; however, e.g. 'apropos export' lists a lot of stuff, excluding the regular export command (export is a shell builtin, documented in the bash man page rather than in a man page of its own).

List all commands: 'compgen -c', list aliases 'compgen -a'.

An alias may be defined for a command; list the currently defined aliases with 'alias -p'.

grep

The grep command (global regular expression print)

Sample use:
  • grep 'word' filename
  • grep 'word' file1 file2 file3
  • grep 'string1 string2' filename
  • cat otherfile | grep 'something'
  • command | grep 'something'
  • command option1 | grep 'data'
  • grep --color 'data' fileName
Recursive use: '$ grep -r 'optee' /home/marc'

There's also ngrep (starting point: RFC 1470) and egrep, ...

find
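A few typical invocations (the paths and names are examples):
find / -name xyz 2>/dev/null                   # search the whole tree for a file called xyz
find ~ -iname '*.pdf' -mtime -7                # PDFs under $HOME modified in the last week
find . -type d -name build                     # directories named build below the current dir
find /var/log -size +10M -exec ls -lh {} \;    # large files under /var/log, with details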

whereis

To find out where 'make' resides: $ whereis make.

tail

$ tail -f /var/log/mail.log | grep user@domain.tld

expect

The expect command runs 'Expect scripts' using the following syntax: expect [options] [commands/command file].

The Expect program uses the following keywords to interact with other programs:
  • spawn Creates a new process
  • send Sends a reply to the program
  • expect Waits for output
  • interact Enables interacting with the program
Expect uses TCL (Tool Command Language) to control the program flow and essential interactions.
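A minimal, hypothetical sketch of driving Expect from the shell (the host, user, password and prompt text are all assumptions and will differ per program):
expect <<'EOF'
  # start the program to automate
  spawn ssh demo@localhost echo hello
  # wait for a password prompt (glob match), then answer it
  expect {
    "assword:" { send "secret\r" }
  }
  # wait until the spawned program finishes
  expect eof
EOF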

Some systems do not include Expect by default.

To install it with apt on Debian-based systems, run sudo apt install expect.

Package management

Package formats

The package you install must match your distribution, Linux version and architecture.

Package repositories

Tools such as Apt download packages from one or more software repositories (sources) and install them onto your computer.

A repository is generally a network server, such as the official DebianStable repository. Local directories or CD/DVD are also accepted.

See https://wiki.debian.org/SourcesList. DebianStable is the official Debian repository for the current release

The main Apt sources configuration file is at /etc/apt/sources.list. You can edit this file (as root) using your favorite text editor. See the sources.list manual page for more info.

To add custom sources, creating separate files under /etc/apt/sources.list.d/ in DEB822 source format is preferred. See the deb822 manual page for more info.
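A sketch of such a file, written from the shell (the file name example.sources and the chosen suites are assumptions):
sudo tee /etc/apt/sources.list.d/example.sources >/dev/null <<'EOF'
Types: deb
URIs: https://deb.debian.org/debian
Suites: bullseye bullseye-updates
Components: main contrib non-free
EOF
sudo apt update    # re-read the sources after adding the file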

Linux versions

To find out Linux version:
  • Use 'lsb_release -a'
  • Use 'uname -a': BlackTiger runs '5.10.0-0.bpo.5-amd64'
  • GrayTiger runs Linux 5.18.0-0.bpo.1-amd64
Or cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Hence:
  • IntelRCS runs Debian 8 Jessie - architecture is AMD
  • BlackMamba runs Kali, codename 'kali-rolling', which continuously feeds from Debian testing, full name Linux kali 5.3.0-kali-3-amd64 #1 SMP Debian 5.3.15-1kali1 (2019-09)
  • Angkor3 is running Ubuntu 12.10, or 'Quantal'
  • DemasiadoCorazon on 14.04
  • BlackTiger runs Debian 10 Buster, architecture is AMD
  • GrayTiger runs Debian 11 Bullseye, architecture is AMD
For Ubuntu: http://ubuntuguide.org. As Ubuntu relies on Debian, the following is worth reading: http://www.debian.org/doc/debian-policy/#contents

Debian dpkg / apt-*

Basics: dpkg

The Debian package manager dpkg is the foundation for installing '.deb' packages. You can find out which version you have with 'dpkg --version'. E.g. GrayTiger: 'Debian 'dpkg' package management program version 1.20.11 (amd64).'

dpkg
  • can be used directly e.g. dpkg --install package-file
  • as frontend to dpkg-deb and dpkg-query
  • is used by
    • apt-*, a front-end for dpkg; apt is the name of the package containing, among others, apt-get (and there is also apt-rpm for Red Hat)
    • dselect
    • synaptic
    • gnome-app-install
    • adept, a KDE package manager
    • aptitude, a terminal based package manager

gnome-software

Since Gnome 3, this is a user-friendly utility. You can start it with 'gnome-software' on the cli.

APT etc

APT is a collection of tools distributed in a package named apt. It can be considered a front-end to dpkg. While dpkg performs actions on individual packages, APT manages relations (especially dependencies) between them, as well as sourcing and management of higher-level versioning decisions (release tracking and version pinning).

APT uses a location configuration file (/etc/apt/sources.list) to locate the desired packages.

Historically, there were many commands such as apt-get, apt-cache and apt-config, all with their specific capabilities. The apt command was created to make the life of the average user easier, by combining the most commonly used command options from apt-get, apt-cache and apt-config.

Details in:
  • apt(8),
  • apt-get(8),
  • apt-cache(8) is the command that works with the cache
  • sources.list(5)
  • apt.conf: /etc/apt/apt.conf is the main configuration file shared by all the tools in the APT suite of tools, though it is by no means the only place options can be set.
  • apt-config accesses the main configuration file /etc/apt/apt.conf in a manner that is easy to use for scripted applications. It also allows to list the apt.conf file ('apt-config dump')
  • The APT Users guide in /usr/share/doc/apt-doc/,
  • apt_preferences(5),
  • the APT Howto.

Configuration of APT

  • '/etc/apt' contains the APT configuration folders and files.
  • 'apt-config' is the APT Configuration Query program, 'apt-config dump' shows the configuration.
More info: The most used apt/apt-get commands are
  • apt install packagename
  • apt-get install packagename
  • apt-get update // downloads the package lists from the repositories and "updates" them to get information on the newest versions of packages and their dependencies. It will do this for all repositories and PPAs, it will not install anything
  • apt-get upgrade // installs the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list
  • apt-get dist-upgrade
Furthermore:
  • apt-file to find out which package a file belongs to

Which packages are currently installed/available?

Installed

What's installed:
  • Simple: 'apt list --installed'
  • 'apt-cache pkgnames'
  • alternative: 'dpkg --list '*' > dpkglist.txt' and then 'less dpkglist.txt'
  • via GUI 'gnome-app-install'
  • legacy: Muon Package Manager, Synaptic
What's available:
  • To list all available packages: 'apt list --all-versions'
  • To list the upgradable packages: 'apt list --upgradeable'

Package information

Use
  • apt show packagename
  • dpkg -L packagename
  • dpkg -l packagename
  • dpkg --contents packagename

Note that you may install a package (.deb file) with a name such as qemu-utils, while its contents is a list of files none of which is called qemu-utils.

Available

What is available in the wild can be found e.g. at Debian or http://packages.ubuntu.com.

What is available for installation on your platform is a function of the repositories you connect to (your /etc/apt/sources.list). The simplest way to find out seems to be via the utilities above.

Debian releases

See

What is available? Then you need to know which Debian release you want. This is a moving target, e.g. in June 2021:
  • The next release of Debian is codenamed "bullseye" — "testing", no release date has been set
  • Debian 10 ("buster") — current "stable" release
  • Debian 9 ("stretch") — "oldstable" release, under LTS support
  • Debian 8 ("jessie") — "oldoldstable" release, under extended LTS support
  • Debian 7 ("wheezy") — obsolete stable release
  • Debian 6.0 ("squeeze") — obsolete stable release
  • Debian GNU/Linux 5.0 ("lenny") — obsolete stable release
  • Debian GNU/Linux 4.0 ("etch") — obsolete stable release
  • Debian GNU/Linux 3.1 ("sarge") — obsolete stable release
  • Debian GNU/Linux 3.0 ("woody") — obsolete stable release
  • Debian GNU/Linux 2.2 ("potato") — obsolete stable release
  • Debian GNU/Linux 2.1 ("slink") — obsolete stable release
  • Debian GNU/Linux 2.0 ("hamm") — obsolete stable release

How to install/remove a package?

You can:
  • apt install
  • apt-get install
Usage modes of apt and apt-get include:
  • 'update' is used to resynchronize the package index files from their sources. The lists of available packages are fetched from the location(s) specified in /etc/apt/sources.list. For example, when using a Debian archive, this command retrieves and scans the Packages.gz files, so that information about new and updated packages is available.
  • 'upgrade' is used to install the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list. Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version.
  • 'full-upgrade' (apt) and 'dist-upgrade' (apt-get), in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages; apt and apt-get have a "smart" conflict resolution system, and will attempt to upgrade the most important packages at the expense of less important ones if necessary.

Install

File '/etc/apt/sources.list' is used to locate the desired packages and retrieve them, and also obtain information about available (but uninstalled) packages. Use:
  • 'apt-get install <packagename>'. This takes care of prerequisite packages too, using the sources.list
  • 'apt-get install localpathtodebfile'. This takes care of prerequisite packages too, using the sources.list
  • GUI: Muon or synaptic

Remove

Do 'apt-get --purge remove foo'. E.g. sudo apt-get --purge remove globalprotect.

How to list installed files of a package

Use 'dpkg -L <packagename>', then use e.g. the man pages of what got installed for more info.

How to update/upgrade Debian

Approach 1: use the gnome 'software' application.

Approach 2: use apt - upgrade etc...
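The apt route, as a sketch:
sudo apt update           # refresh the package lists from the configured sources
apt list --upgradable     # see what would change
sudo apt full-upgrade     # upgrade, allowing dependency changes where needed
sudo apt autoremove       # optionally drop packages that are no longer needed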

pkgconf and pkg-config

pkg-config

See https://en.wikipedia.org/wiki/Pkg-config

pkg-config is a computer program that defines and supports a unified interface for querying installed libraries for the purpose of compiling software that depends on them. It allows programmers and installation scripts to work without explicit knowledge of detailed library path information.

When a library is installed (automatically through the use of an RPM, deb, or other binary packaging system or by compiling from the source), a .pc file should be included and placed into a directory with other .pc files (the exact directory is dependent upon the system and outlined in the pkg-config man page). This file has several entries.

These entries typically contain a list of dependent libraries that programs using the package also need to compile. Entries also typically include the location of header files, version information and a description.
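Typical use when compiling against a library (glib-2.0 and myprog.c are examples; any installed .pc file will do):
pkg-config --modversion glib-2.0                                  # version of the installed library
pkg-config --cflags glib-2.0                                      # include flags needed to compile against it
pkg-config --libs glib-2.0                                        # linker flags
gcc myprog.c $(pkg-config --cflags --libs glib-2.0) -o myprog     # hypothetical program using GLib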

pkgconf

See http://pkgconf.org/: pkgconf helps to configure compiler and linker flags for development frameworks. It is similar to pkg-config from freedesktop.org, providing additional functionality while also maintaining compatibility.

Red Hat, RHEL, YUM, DNF

Yum, the 'Yellowdog Updater, Modified' is a package manager for .rpm-based distributions. DNF or Dandified YUM is a successor.

AppImage

Allows the installation of binary software independently of specific Linux distributions.

Before we run the AppImage, we have to make it executable, e.g. : $ chmod +x PyCharm_Community_Edition.2022.2.2-x86_64.AppImage

./audacity-linux-3.4.2-x64.AppImage 
/lib/x86_64-linux-gnu/libatk-1.0.so.0
/lib/x86_64-linux-gnu/libatk-bridge-2.0.so.0
/lib/x86_64-linux-gnu/libcairo-gobject.so.2
/lib/x86_64-linux-gnu/libcairo.so.2
/lib/x86_64-linux-gnu/libgio-2.0.so.0
/lib/x86_64-linux-gnu/libglib-2.0.so.0
/lib/x86_64-linux-gnu/libgmodule-2.0.so.0
/lib/x86_64-linux-gnu/libgobject-2.0.so.0
/lib/x86_64-linux-gnu/libgthread-2.0.so.0
/lib/x86_64-linux-gnu/libjack.so.0
/lib/x86_64-linux-gnu/libpixman-1.so.0
findlib: libportaudio.so: cannot open shared object file: No such file or directory
/home/marc/Downloads/Audacity/audacity-linux-3.4.2-x64.AppImage: Using fallback for library 'libportaudio.so'
/tmp/.mount_audaci7QBgFi/bin/audacity: /tmp/.mount_audaci7QBgFi/lib/libselinux.so.1: no version information available (required by /lib/x86_64-linux-gnu/libgio-2.0.so.0)
But Audacity starts ok.

Help and documentation

man/Xman

'man' information is stored inside files, residing in /usr/man.
Examples include man1, ... , mann and X.
As the Red Hat tips suggest running /usr/sbin/makewhatis /usr/man /usr/X11R6/man to create 'the database' (whatever that is), I did so. As a result, you can do 'man -k xyz'. This will inform you whether man information is available inside the various man sections like man1, man2, ... . For example, 'man -k password' comes up with more than 10 suggestions. Changing a user's password ON A NETWARE SERVER is done by 'nwpasswd'. (And how on a simple Linux? Well, by using the Control Panel's user and group management.) Xman, with its 'search' function, is also nice to use.

whatis and id

'whatis' is another kind of basic help system. 'id' is useful as it displays who you are.

file

Try e.g. 'file /etc/resolv.conf'. This will tell you it's an ASCII text file.

info / xinfo

The 'info' system is the old non-graphical hypertext documentation tool. Try "xinfo" now.

find / locate

Try e.g. "find / -name xyz". This starts the search from / for any file called xyz. Try also "locate". This requires you build an index via 'updatedb'. This is supposed to run automatically via crontab.

less command

The 'less' command shows file contents, also for really large files. Use arrows-up/down to scroll. Navigation:
  • Press “f” to scroll to the next screen
  • Press “b” to scroll to the previous screen
  • Go forward and back one line at a time by using “j” and “k”
  • Search for patterns by starting with “/” (just like vi)
  • Go to a specific line number by typing the line number, followed by “g”
  • Press “q” to quit
Parameters: /pattern Search forward in the file for the N-th line containing the pattern. N defaults to 1. The pattern is a regular expression, as recognized by the regular expression library supplied by your system.

The Linux Documentation Project

Found in /usr/doc/html.

Basic sysadmin and security

Gnome desktop

Basics

BlackTiger runs Gnome 3.30.2 (see 'Settings'). Useful: 'gnome-tweaks'. Right-mouse click: use two fingers. Copy a path in Nautilus: Ctrl-L, then Ctrl-C, then Ctrl-V. Key tools: dconf-editor and gsettings.

Desktop icon

How do I create a desktop shortcut? Open dconf-editor, navigate to org.gnome.desktop.background and check that "show-desktop-icons" is enabled; then open the file manager (nautilus), go to the application you want, right-click on it, select "Copy to" and then select ~/Desktop. Does not work.

GDM

The GNOME Display Manager (GDM) is responsible for managing displays on the system. This includes authenticating users, starting the user session, and terminating the user session. It interacts with the Fast User Switching Applet (FUSA) and gnome-screensaver (which I installed manually on BlackTiger).

Regardless of the display type, GDM will do the following when it manages the display. It will start an Xserver process, then run the Init script as the root user, and start the greeter program on the display. The greeter program is run as the unprivileged "gdm" user/group. The authentication process is driven by Pluggable Authentication Modules (PAM). The PAM modules determine what prompts (if any) are shown to the user to authenticate.
  • GDM
    • apt list --installed | grep gdm
    • GDM wiki
      • To see the status of gdm: '$ systemctl status gdm' (might say e.g. loaded (inactive))
How to configure a lockscreen? A screensaver?

Gnome autostart

Autostart is a desktop feature. In Gnome you find /etc/xdg/autostart .

Gnome terminal

GrayTiger runs Gnome Terminal as the terminal program. You can manually start it as 'gnome-terminal'. Configuration is via the right mouse button / Preferences. Particularity: copy/paste needs Shift-Ctrl-C / Shift-Ctrl-V (note the additional 'Shift').

Virtual consoles

These are defined in '/etc/inittab' by statements such as:
6:2345:respawn:/sbin/mingetty tty6

Now replace the minimal getty (mingetty) by a better one. The Linux readme within the getty_ps documentation ('/usr/doc/...') explains that getty is for consoles, uugetty is for modem links.

Hence it's getty I want, for testing that I can change the getty of a console. OK, done and logged into Kassandra.log. What are the options? 'man getty' explains.

Getting back that color ls

On Kassandra:
Try 'man color-ls' and 'man dircolors'. The .bash_profile needs to be updated with an 'eval `dircolors`' and an alias 'ls=color-ls --color=yes'.

Root/sudo/su

Bullseye

Came configured with root and marc as userids. Password for marc was changed via Debian's admin GUI. Password for root was changed by 'su', then 'passwd'.

Legacy

Root is disabled by default. The first account created, i.e. marcsel, has administrator rights and can do 'sudo'. Precede any command you would need to execute as root with sudo.

Ubuntu rootsudo

https://help.ubuntu.com/community/RootSudo To have administrator access, one must use one of two special commands, either 'sudo' or 'gksudo'. To use either command, your login must be registered in the sudoers file. This file is so called because it lists all users who can use the sudo command. To add a user to the sudoers file, the system administrator (the person with the login that was registered during the installation of Ubuntu) must login, and add the user with the "Users and Groups" administration utility. To access that program, select System ⇒ Administration ⇒ Users and Groups from the top toolbar. [Note: the system administrator cannot launch this application from within someone else's login session] To use a graphical tool such as Dolphin or Kate, do a "gksu dolphin" from the command line. You can simulate a root login with 'sudo -i'. If you really want you can enable root with 'sudo passwd root'. Or you can just do a 'su' without specifying anything, and then provide the root password.

Adding new users and changing permissions

Changing permissions

To make a script executable: 'chmod +x scriptname'

Changing permissions: https://help.ubuntu.com/community/FilePermissions
  • chmod
  • chown - e.g. chown marc4 /home/marc4/Documents/Mac2009/Documents

Ubuntu Angkor

Root is created but only accessible for login if you force a boot in safe mode. You can open a terminal in Dolphin. User marc was created by PC Tronics but erroneously disabled by me by renaming his homedirectory. Naming it back did not help. So I created marc3:
  • useradd -d /home/marc3 -m marc3 #this creates homedir and userid
  • passwd marc3 #this sets the psw
  • then I manually edited /etc/group to give marc3 same groups as marc (marc adm dialout cdrom plugdev lpadmin admin sambashare)

Generating a new password - Red Hat

You can use mkpasswd to generate passwords, and to force them on a user. However, 'mkpasswd -l 6 patti' fails, stating there is no /etc/passwd file. Does Red Hat use some kind of shadow password file? No, since a 'less /etc/passwd' reveals the contents and all the userids. Patti has been created, apparently without a password. Still, she can't log in, and only gets the message 'login incorrect'.

So what, Red Hat? ===> Use GUI (control panel) for user management, and you're OK.

Mounting devices

Your kernel needs to support the device type you want to mount. Good place to find out is via the systemlog viewer (e.g. KSystemLog) or in /var/log/messages. To access a device you need to be able to 'see' the device, and then you need to specify a mount point that applications can reach.

The commands:
  • fdisk -l will show what disks are found
  • 'mount' will show you what is currently mounted
  • 'mount -t iso9660 /dev/... /mnt' will mount /dev/... on /mnt as an iso9660 type
  • 'umount /mnt' will remove the mountpoint
  • 'lsof' will list open files
  • 'fuser' will list users of files
  • 'fuser -km /home' kills all processes accessing the file system /home in any way.

Mounting CD

Question : Is kernel supporting this?
Answer : Yes, e.g. on Toshiba laptop: look in /var/log/messages : kernel : hdc: TOSHIBA CDROM XM1402B ATAPI CDROM Drive.

Question : How to mount?
Answer : look in /usr/doc/howto/cdrom : mount -t iso9660 -r /dev/cdrom /mnt
WRONG - you have to replace /dev/cdrom by /dev/hdc. Then it works. So: 'mount -t iso9660 -r /dev/hdc /mnt'. Do a 'cd /mnt', and you'll see the CD.

Question : How to unmount?
Answer : umount /mnt
 

Question : What if you get the message /dev/hdx device is busy?
Answer : that means a process is still accessing the CD. If you're working under X, your previous non-X terminal might still hold the CD. Try fuser -v /mnt... to identify the holder of the lock.
 

Mounting a floppy

First, do a 'mkdir /floppy'. Then 'mount -t msdos /dev/fd0 /floppy'.

Mounting a USB device

USB is a bus, with a single host controlling all connected devices. Devices can't directly talk to one another. Ways to find info:

  • usbmgr
  • lsusb
  • insmod
  • cat /proc/scsi/scsi
  • cat /proc/bus/usb/devices
  • cat /proc/pci
  • lspci -v
  • "cd /proc/bus/usb", "ls -l"
  • for formatting use 'gparted'
Removable hard disks mostly simulate SCSI, so a prerequisite is to have SCSI support in the kernel (ref linux-usb.sourceforge.net). The device can be formatted as FAT32, NTFS, etc.

Find info via
  • ksyslog viewer
  • /var/log/messages - if device is recognized *** try 'dmesg | grep SCSI'
  • 'cat /proc/scsi/scsi' - the USB MSS is accessed via emulated SCSI
  • 'sudo fdisk -l' for disks

First create a mountpoint directory, e.g. /usbntfs (since the disk is formatted as NTFS). Mount with 'mount -t ntfs /dev/sda5 /usbntfs' and you're in.

Used USB's include:
  • /media/EEMA-USB-STICK is the EEMA usb stick, is formatted as ntfs, mounted as '/dev/sdi1 on /media/EEMA-USB-STICK type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)' *** apparently the name /dev/sdi1 is variable ...
  • Samsung Story 1 TB devices, Samson One and Samson Two, mounted as '/dev/sdh1 on /media/Samson One type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)'
  • mind the subtle difference in device name: sdi versus sdh
  • /media/WHISKY is the IOMega Whiskybottle *** connect with physical connector that is most close to disk, not with the extender cord
Question: how does this automounting with eg Dolphin work? /etc/fstab, /etc/mtab ...stuff.

Creation and removal of directory

Command rmdir will only remove empty dirs. Otherwise use rm -r dir. You will be prompted to confirm. To avoid the prompt, use rm -rf dir.

Packaging a directory with tar, gzip and pgp

Basics

  • To pack: "tar -c Kassandra_Control > KasCntl.tar"
  • To check contents: "tar -t KasCntl.tar" (or via KDE)
  • To unpack: "tar -xvf KasCntl.tar"

VERSION 1.a TAR

  • To pack: "tar -c *.jpg > ama.tar"
  • To unpack: "tar -xvf ama.tar"

VERSION 1.b TAR and GZIP in two steps

  • To pack: "tar -c *.jpg > ama.tar"
  • To gzip: "gzip ama.tar" (which results in ama.tar.gz)
  • To unzip: "gunzip ama.tar.gz"
  • To unpack: "tar -xvf ama.tar"

VERSION 1.c TAR, GZIP and PGPE -C (Conventional, i.e. symmetrical)

  • To pack: "tar -c *.jpg > ama.tar"
  • To gzip: "gzip ama.tar" (which results in ama.tar.gz)
  • To pgp: "pgpe -c ama.tar.gz" (-c stands for conventional, hence IDEA)
  • ---alternatively: "pgpe -c ama.tar.gz -o ama.ref" (-o stands for output; if you want to wipe, use the -w flag)
  • NOW HAVE YOU REMEMBERED THAT PASSPHRASE?
  • To unpgp: "pgpv ma.tar.gz.pgp"
  • ---alternatively: "pgpv ama.ref -o ama.tar.gz"
  • To unzip: "gunzip ama.tar.gz"
  • To unpack: "tar -xvf ama.tar"

VERSION 2.a TARZIP (tar including compress)

  • To pack: "tar cvfz /temp/fea.tgz foo/fea" (note that foo/fea refers to the entire directory)
  • To unpack: "tar xvfz fea.tar" (note that this will recreate everything at the curent location)

VERSION 2.b TARZIP and PGPE -R (Asymmetrical)

  • To pack: "tar cvfz /temp/fea.tgz foo/fea" (note that foo/fea refers to the entire directory)
  • To encrypt: "pgpe -r marc.sel@be.pwcglobal.com -o /temp/fea.tgz.pgp /temp/fea.tgz" (-r specifies a recipient, i.e. public key crypto) --- use the -w flag for wiping ---
  • To decrypt: "pgpv /temp/fea.tgz.pgp (you'll be challenged for the passphrase)"
  • To unpack: "tar xvfz fea.tar" (note that this will recreate everything at the curent location - leading to e.g. /Malekh/Kassandra_Data/...)

VERSION 3 TARZIP and PGP -C (Conventional, i.e. symmetrical)

  • To pack: "tar cvfz temp/fea.tgz foo/fea" (note that foo/fea refers to the entire directory)
  • To pgp: "pgpe -c ama.tar.gz -o ama.ref" (-o stands for output; if you want to wipe, use the -w flag)
  • ---
  • To un-pgp "pgpv ama.ref -o ama.tar.gz"
  • To unpack: "tar xvfz fea.tar" (note that this will recreate everything at the curent location)

VERSION 4 TARZIP and Geheimnis/gpg - SuSE 7.2

  • To pack: "tar cvfz /temp/fea.tgz foo/fea" (note that foo/fea refers to the entire directory)
  • To pgp: "geheimnis" -first create keypair, it seems existing pgp-ring are hard/impossible to reuse, tthntc...
  • To wipe: "shred" with KDE
  • ---
  • To un-pgp "geheimnis"
  • To unpack: "tar xvfz fea.tgz" (note that this will recreate everything at the curent location)

Setting the time

Red Hat

RH: via the 'time machine' on the control panel.

SuSE

Use the date command. To set the time to March 5, 2002, 21h00: "date 03052100".
 

Process/service starting, stopping, monitoring - systemd and systemctl

Finding out process information:
  • use 'htop', 'top', 'pstree', 'systemctl' to see all processes
  • use 'ps' to enumerate your own processes
  • use 'ps -e' or 'ps -e | less' to enumerate all processes
  • For details: '$ cat /proc/<pid>/status', or '$ pwdx <pid>'
  • To list the executable of a process: '$ ls -l /proc/<pid>/exe'
  • $ ps uww 1234 shows various pieces of information
Searching for a specific process:
  • 'ps -e | grep xf' to find all processes whose names contain xf
  • use 'ps aux | grep vsftpd' to see if there is a process indeed.
Listing files opened by a process:
  • $ lsof -p 1234
  • $ ls -l /proc/<pid>/fd
Killing a process:
  • $ sudo kill <pid>
  • $ sudo pkill processname
  • $ sudo killall <name> (but sends only SIGTERM to all matching processes)
  • $ sudo kill -9 <pid>
You may use 'kill -9 <pid>' to forcibly kill the process; killall sends only SIGTERM to all matching processes.

Systemd overview

systemd was started in 2010 and provides replacements for daemons (such as init) and utilities, including the startup shell scripts, pm-utils, inetd, acpid, syslog, watchdog, cron and atd. Debian uses it since Jessie.

systemd is a software suite, consisting of:
  • systemd itself is a system and service manager, composed of many binaries.
  • systemctl is a command to introspect and control the state of the systemd system and service manager. Not to be confused with sysctl (a software utility of some Unix-like operating systems that reads and modifies the attributes of the system kernel such as its version number, maximum limits, and security settings)
  • systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager.

systemd configuration

Configuration is based on units (any resource that the system knows how to operate on and manage), configured in unit files. The type of the unit is recognized by the file name suffix, .mount in case of a mount point. Unit files provided by Debian are located in the /lib/systemd/system directory. If an identically named local unit file exists in the directory /etc/systemd/system, it will take precedence and systemd will ignore the file in the /lib/systemd/system directory. Some units are created by systemd without a unit file existing in the file system.

System administrators should put new or heavily-customized unit files in /etc/systemd/system.
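A minimal sketch of a local service unit created from the shell (the unit name hello.service and the command it runs are hypothetical):
sudo tee /etc/systemd/system/hello.service >/dev/null <<'EOF'
[Unit]
Description=Hypothetical hello service

[Service]
Type=oneshot
ExecStart=/usr/bin/logger "hello from systemd"

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload          # make systemd re-read unit files
sudo systemctl start hello.service
systemctl status hello.service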

Unit-file types include:
  • .service
  • .socket
  • .device (automatically initiated by systemd)
  • .mount
  • .automount
  • .swap
  • .target
  • .path
  • .timer (which can be used as a cron-like job scheduler)
  • .snapshot
  • .slice (used to group and manage processes and resources)
  • .scope (used to group worker processes, isn't intended to be configured via unit files)
man systemd.unit explains the hierarchy of the configuration files. The journal utility (journalctl) is used to consult systemd's logging.

Using systemctl

systemctl is a command to introspect and control the state of the systemd system and service manager.

Not to be confused with sysctl, a utility that reads and modifies the attributes of the system kernel such as its version number, maximum limits, and security settings. It is available both as a system call for compiled programs and as an administrator command for interactive use and scripting. Linux additionally exposes sysctl as a virtual file system.
  • Debian systemctl manpage (buster)
  • 'systemctl' without arguments displays a list of all loaded systemd units (units: any resource that the system knows how to operate on and manage, configured in unit files)
  • 'systemctl list-unit-files' shows all installed unit files
  • 'systemctl status' displays the overall status (states: )
  • 'systemctl list-units --type=service' displays a list of all loaded services (services: ...)
  • 'systemctl list-units --type=service --state=active' displays a list of all loaded and active services, this includes running and exited services
  • 'systemctl list-units --type=service --state=running' displays a list of all services that are loaded, active and running
  • Using systemctl to enable/disable a service when the server boots (enabling does NOT start the service):
    • systemctl enable sshd
    • systemctl disable sshd
  • Using systemctl to start or stop a service:
    • systemctl status sshd
    • systemctl restart sshd
    • systemctl start sshd
    • systemctl stop sshd
    • systemctl kill sshd

Legacy

  • Old-school: System V initscripts in '/etc/init.d', you start via 'sudo /etc/init.d/apache2 start'
  • New-school: 'upstart'-jobs in '/etc/init', you
    • ask for status via 'status servicename' e.g. 'status cups'
    • start via 'sudo service servicename start' e.g. 'sudo service mysql start' (similar for stopping)
Also:
  • use 'htop', 'top' or 'pstree' to see all processes
  • use 'ps' to enumerate your own processes
  • use 'ps -e' to enumerate all processes, 'ps -e | grep xf' to find all processes whose names contain xf
  • use 'ps aux | grep vsftpd' to see if there is a process indeed.
  • use 'kill...'

Logging

Logging comes in two types:

(1) from executing processes, calling the log function, whose calls are served by a logging daemon such as klog and syslogd (the daemon then writes the entries into the logfile). Typical logfiles include /usr/adm/lastlog (each user's most recent login time), /etc/utmp (a record per login) and /usr/adm/wtmp (a record per login/logout). You can use last to view such a file.

(2) from the accounting, started via the accton command, the /usr/adm/acct contains a log of every command run by the users.

The syslog facility allows any program to generate a log message by writing to /dev/log, /dev/klog and 514/udp. Grouping of the sources generating the log entries is done in syslog's facilities such as kern, user, mail, lpr, auth, daemon, ... .

In addition to facilities, there are priorities as well: emerg, alert, crit, err, warning, ... .

Incoming log entries are parsed against a table in /etc/syslog.conf, defining for each facility & priority where to forward or log the message.

An example:
*.err;kern.debug;auth.notice    /dev/console
auth.*                          root

On previous Slackware Linux, standard logfiles include:
- /var/adm/syslog, messages (bootmessages), lastlog, utmp (binary logfile about current users), wtmp (binary logfile about login/logout)
- /etc/utmp (binary logfile about current users)

Under RedHat, have a look in /var/log. I modified /etc/syslog.conf to log everything into /var/log/syslog.kassandra. For this purpose, I saved the original syslog.conf into .original, and I did 'touch /var/log/syslog.kassandra'. I then stopped/restarted syslogging through the control panel/runlevel manager.

'Tail /var/log/syslog.kassandra' tells me the restart worked out fine.

To make sure I log the absolute maximum and know where, I modified /etc/syslog.conf; now everything goes to /var/log/avina001.log.
---- key line in /etc/syslog.conf ---------------
# enable this, if you want to keep all messages
# in one file
*.*    -/var/log/avina001.log
---- end of /etc/syslog.conf -------------------
Remember: the "dmesg" command is also useful to display kernel boot-time messages.

Booting

Booting Debian with GRUB/GRUB2

GRUB2 is a bootloader and the one used by Debian. GRUB2 is typically installed during the installation of Debian itself. If properly installed, GRUB2 will be automatically detected by the computer's UEFI during boot (or BIOS for older computers). If multiple bootloaders exist on the same computer, such as with a dual boot machine, the user will need to enter their computer's UEFI or BIOS to set the priority order of the bootloaders present since the computer will execute only one bootloader after a successful power-on self-test (POST). (They may also need to turn off the Secure Boot option in UEFI to stop it from preventing GRUB2 from launching.)

GRUB2 is often referred to as simply GRUB. GRUB2 is a re-write of an earlier version of GRUB, still in use but mostly on older computers, and now called GRUB Legacy. GRUB2 will normally be what is wanted for a machine with UEFI. It can also work with older machines with BIOS but GRUB Legacy may be found on those.

See:
  • https://en.wikipedia.org/wiki/GNU_GRUB
  • https://www.gnu.org/software/grub/
Configurable items include:
  • /etc/default/grub
  • /boot/grub/grub.cfg (DO NOT EDIT THIS FILE, It is automatically generated by grub-mkconfig using templates from /etc/grub.d and settings from /etc/default/grub)
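
A sketch of the usual Debian workflow for changing GRUB settings (GRUB_TIMEOUT is just an example variable; update-grub is Debian's wrapper around grub-mkconfig):
sudoedit /etc/default/grub    # e.g. adjust GRUB_TIMEOUT or GRUB_CMDLINE_LINUX_DEFAULT
sudo update-grub              # regenerates /boot/grub/grub.cfg from /etc/grub.d and /etc/default/grub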

Booting Ubuntu (historical)

Basics are documented in http://www.debian.org/doc/debian-policy/#contents. On Angkor I installed "BUM" to manage what gets started at boottime.

Some more detailed information can also be found in the files in the /usr/share/doc/sysv-rc directory. Linux run levels are based on System V init:
  • 0 System Halt
  • 1 Single user
  • 2 Full multi-user mode (Default)
  • 3-5 Same as 2
  • 6 System Reboot
Each defined run level should have an rcX.d directory where X is the run level number. The contents of the rcX.d directory determines what happens at that run level.

Use 'runlevel' to find out current runlevel (typically 2). When changing runlevels, init looks in the directory /etc/rcn.d for the scripts it should execute, where n is the runlevel that is being changed to, or S for the boot-up scripts.
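
A quick illustration (assuming a System V init layout as described above):
runlevel          # prints previous and current runlevel, e.g. 'N 2'
ls /etc/rc2.d     # S* links start services at runlevel 2, K* links stop them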

Booting SuSE (historical)

Use SystemV init GUI editor. On malekh, unfortunately, this utility has gone... Checked it out on boy, 'sysvinit-2... is another package. Apparently, the package gets installed by default by YaST, but this excludes the GUI I used on boy, ksysvinit. Check out www.kde.org: the package kdeadmin contains ksysvinit. I downloaded it into /Kassandra_Data/AdditionalRPM, but it is in .bz2 format, which gzip does not recognize. Alternative: get kdeadmin package from a CD. SuSE 6.1 only comes with a kdeadmin-1-1.dif file on CD1, this seems to be some kind of patch file, not the real thing. Now what, ksysvinit?

Default solution suggested by SuSE is

  • use YaST to automatically adjust entries in /etc/rc.config
  • manually adjust /etc/rc.config
  • use the rctab command (not really userfriendly)

Booting Red Hat (historical)

Source of information : RedHat's 'Boot-Process-Tips'. Linux now uses SysV-style initialization.

(1) Start kernel, LILO starts a kernel image (e.g. vmlinuz...)

(2) Start 'init' The kernel searches /etc, /sbin (and maybe some other places) for 'init', and runs the first one it finds.

(3) 'init' opens /etc/inittab. By opening '/etc/inittab', 'init' finds out the sysinit script ('/etc/rc.d/rc.sysinit') and the runlevel ('id:3:initdefault' => runlevel 3 is default). I'm not sure whether the rc.sysinit script runs before the rest of the scripts are kicked off, but let's assume it does.

(4) the /etc/rc.d/rc.sysinit script executes Here, a lot of things happen, including starting rc.serial (if it exists).

On default RedHat, rc.serial does NOT seem to exist. However, under the Control Panel/Network Configurator, you can define and activate interfaces, including e.g. a ppp0 on /dev/cua0. So would it not be possible to define another ppp interface, on /dev/ttyS0? Who would deal with the 'setserial' aspects?

(5) the scripts for the desired runlevel are executed. The default runlevel (defined in 'id:3:initdefault') is 3, which (I assume) requires the running of all the scripts in the '/etc/rc.d/rc3.d' directory. In this directory, there are only links to scripts. The scripts actually reside in '/etc/rc.d/init.d'. Each of these scripts can also be executed manually, e.g. '/etc/rc.d/init.d/httpd.init stop' (or start).

Link with the control panel/runlevel editor? Well, if you add/remove a script from a runlevel, this is automatically reflected in the links in the /etc/rc.d/rc3.d directory.

Shutdown

Use 'shutdown now', 'shutdown -h now' (halt), 'shutdown -r now' (reboot).

Video hardware and display

Legacy on Angkor

GigaByte Angkor comes with Nvidia 'GT200- Geforce G 210'. You can get details via 'lspci -vv'. After upgrade to Lucid Lynx v10.4, lots of problems with installing the nvidia driver. Apparently this is a kernel module. Problems you have to solve:
  • need to download the driver file itself
  • need to remove the nouveau driver which is installed by default (otherwise the nvidia install fails immediately)
  • need to install the kernel header files
  • during install of nvidia driver, the kernel module is actually compiled against the kernel header files

Finally got it working with instructions from help.ubuntu.com/community/NvidiaManual....

Website: 'http://www.nvidia.com/object/product_geforce_210_us.html'. This reads QUOTE: Installation instructions: Once you have downloaded the driver, change to the directory containing the driver package and install the driver by running, as root, "sh ./NVIDIA-Linux-x86-190.53-pkg2.run". You may need to cd to "/marc4/downloads". UNQUOTE

One of the last installation steps will offer to update your X configuration file. Either accept that offer, edit your X configuration file manually so that the NVIDIA X driver will be used, or run nvidia-xconfig.

What gets installed

  • documentation in /usr/share/doc/NVIDIA_GLX-1.0
  • executables of utilities in /usr/bin/... such as nvidia-detector, nvidia-installer, nvidia-settings, nvidia-uninstall, nvidia-xconfig...
  • for a full list of what gets installed, check the documentation
  • also, "syslog" shows a lot of nvidia results and extensions.
If your X configuration disappears for a userid e.g. marc3, then boot in recovery mode, do a login marc3, and then a "sudo /usr/bin/nvidia-xconfig". This writes a new xconfig.

NVIDIA troubleshooting

When you get strange behaviour in X, you can boot in recovery mode, login as root, and then execute "/usr/bin/nvidia-installer --update". This will download the latest driver from www.nvidia.com. You get guided through an ncurses-based installer that rebuilds the kernel module and re-configures. Legacy 'lspci -vv' output:
VGA compatible controller: nVidia Corporation GT200 [GeForce 210] (rev a2)
        Subsystem: XFX Pine Group Inc. Device 2941
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
        Kernel modules: nvidiafb
Ref above: the kernel module is nvidiafb. Also: 'xrandr' shows all possible resolutions.

What does this mean, 'kernelmod nvidiafb'? Executing 'lsmod' does not list this module. Modules are found in '/lib/modules': you can find '/modules/2.6.31-14-generic/kernel/drivers/video/nvidia/nvidiafb.ko'. What's that? Use systemlogview to peek inside the X.org log. This shows a device section with driver "nv". Further down it's specified what "nv" supports... a long list but not the GEFORCE G 210. And a little bit further down you see that X probes and does indeed find a GEFORCE G210.

So the mediocre quality is probably due to using just the "standard" driver "nv". What would be better?

Libraries: libc, libXm, ..

Introduction: format of executables

Linux supports two formats for executables: a.out (assembler output) and ELF (Executable and Linking Format). a.out is discontinued; ELF is the standard now. Full details in "The Linux Kernel Book".

Libraries

Libraries are essentially a means of gathering several object files together. This can be done in two ways:
  • static: during link edit, the library code is integrated with the executable code
  • dynamic (or 'shared'): the library code is loaded at execution time. Such dynamic code is also loaded only once, and then shared by all applications. This loading is done by ld.so, the runtime loader. Linux runs ldconfig at boot time to create links to the most recent shared libraries.
Note that the names of dynamic libraries conform to: libNAME.so.MAJOR.MINOR.

These libraries are defined as:
  1. /lib (a so-called trusted library)
  2. /usr/lib (=)
  3. Libraries specified in /etc/ld.so.conf

Tools

For shared libraries:
  • ldconfig: configures the dynamic linker run-time bindings - executes automatically at boot-time
  • sudo ldconfig -p: prints all shared objects (i.e. shared libraries) known to the linker
  • ldd: lists what shared libraries an executable needs, e.g. 'ldd /usr/x386/bin/xv' returns a list of libraries
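
A small sketch of these tools in action (the paths are examples; /bin/ls and libc are assumed to be present):
ldd /bin/ls                    # shared libraries the ls binary needs
sudo ldconfig -p | grep libc   # is a given library known to the dynamic linker?
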
Kassandra RH 4.0: the libc package contains the basic shared libraries that are necessary for Linux to function. RH 4.0 came with libc 5.3.12-8 . Prior to ELF, Linux used a.out format. The library aout provides backward compatibility with this format.

Borsalino RH 5.0: for example: JDK library requirements: Before downloading the jdk, I checked my libs and found in glint: libc: 5.3.12-24 ld.so: 1.9.5-3 Xfree86: 3.3.1-14 Should be alright. Try ldconfig -D for obtaining an overview.

Suse53 libdb.so.1 problem Programs such as kpackage, man and xman suddenly started complaining they can't load libdb.so.1. Why not, how did I delete it (man used to work)? On Suse53-CD5 there is a /usr/lib/libdb.so.1.85.5, and a /usr/i486-linuxaout/libdb.so.1 (which is the older aout format I suppose...). Oddly enough, if I run a find on libdb.so.1, the file is locally found in /usr/i486-linuxaout/libdb.so.1 --- so why are they complaining? How are libs specified on my machine: in the three locations specified supra. Running ldconfig -D reveals a lot of info, including that apparently libdb.so.1 gets loaded OK from libdb.so.1.85.5 (the version found on Suse53 CD5). Now what?

Hard disk partitioning

With Ubuntu try 'gparted'.

History: Using fdisk 'print' option on Kassandra reveals:
Disk /dev/hda: 64 heads, 63 sectors, 786 cylinders
Units = cylinders of 4032 * 512
Device Boot Begin Start End Blocks ID System
/dev/hda1 * 5 5 385 768096 7 OS/2 HPFS
/dev/hda2 386 386 776 788256 83 Linux native
/dev/hda3 1 1 4 8032+ 12 Unknown
/dev/hda4 777 777 786 20160 5 Extended
/dev/hda5 777 777 782 12064+ 82 Linux swap
/dev/hda6 783 783 786 8032+ 4 DOS 16-bit <32M
 

Disk management

Commands:
  • 'gnome-disks'
  • 'df' or 'df --local --human-readable -T'
  • 'du' - disk usage - lists usage of every file - not very useful to get a 'big picture'
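
A sketch for getting a quicker 'big picture' than plain du (the directory is an example):
df --local --human-readable -T    # free space per mounted filesystem
du -sh /home/marc/* | sort -h     # one total per directory, sorted by size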

Keyboard

Use Alt-Gr key to access ~.

Red Hat 5.0

Red Hat 5.0 User Guide: use /usr/sbin/kbdconfig. Use e.g. "be-latin1". Note that this does not define your keyboard under X.

SuSE 5.3

Console: use YaST to configure an azerty keyboard. This definition goes into "/etc/rc.config".

Under X: use SaX to configure an international keyboard, with "Belgian" keys. Use keymap to fine-tune if required.

Access MS-DOS floppies

Use the mtools (e.g. mdir, mcopy). Use 'mtoolstest' to check the configuration. Defaults in '/etc/mtools.conf'. Remember to use 'a:', not 'A:', eg 'mdir a:'.

rsync

Intro

Original source: rsync.samba.org. As per man rsync, there are 4 basic scenarios to use rsync:
  • via remote shell, pull
  • via remote shell, push which works for me, from Windows PC to Angkor2
  • via rsync daemon, pull
  • via rsync daemon, push
On Linux, rsync can be used as a client, or can be started as a daemon ("rsync --daemon"). On Windows, likewise, with a "service" rather than a daemon. Backing up Angkor2 to USB with rsync: rsync -vvvrt /home/marc/Documents "/media/SamsonTwo/201306/Backup Angkor2".

Using rsync from laptop to usbdisk

Mount e.g. "Samson One" so it's visible in Dolphin. Then do an rsync -vvrt /home/marc/Documents "/media/Samson One" This results in the contents of /home/marc/Documents be replicated into /media/Samson One.

Using rsync from laptop to usbstick

What's the usbstick called? Issue "mount", which returns:
/dev/sdb1 on /media/KINGSTON type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks)
So the usbstick is formatted as vfat. VFAT is an extension of the FAT file system and was introduced with Windows 95. The command "rsync -vvvrt /home/marc/Documents /media/KINGSTON" creates /Documents on the usbstick and syncs the files. Options:
  • -v verbose
  • -r recursive (into directories)
  • -t preserve modification times
"--modify-window=2" is recommended if the target file system is different from ext2 or ext3, because the time management of VFAT/FAT32 etc. is less accurate than that of ext2 or ext3.
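
Combining the options above into one command for the VFAT stick (a sketch based on the notes above):
rsync -vrt --modify-window=2 /home/marc/Documents /media/KINGSTON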

Legacy

Legacy: try scenario 4, push from Windows to rsync daemon on Angkor2. Needs /etc/rsyncd.conf and /etc/rsyncd.secrets to be created. Done, daemon starts, but authentication continues to fail. Tried various users and passwords.

Legacy: try scenario 2:
  1. on Windows, install cygwin from www.cygwin.com, including ssh-openssh and rsync
  2. on Linux, install ssh, sshd and rsync
  3. on Windows, execute keygen (remember passphrase on private keyfile), mail pubkey to Linux, store in /home/marc/.ssh/authorized_keys, try 'ssh marc@192.168.1.5'
  4. on Windows, open cygwin terminal, perform rsync -avvvr --rsh=/usr/bin/ssh /cygdrive/c/Users/selm/Desktop marc@192.168.1.5:/home/marc
Comments:
  • the "cygdrive/c" is the way cygwin refers to C:\
  • since I don't run local DNS, you need to use the ip of Angkor2
  • on Windows, if you open a cygwin terminal, you find yourself at "/home/selm", which is really "C:\cygwin\home\selm".
  • on Windows, remember if you create a script in /home/selm, you need to call it as "/home/selm/scriptname" or bash won't find it since it's not on its path

freefilesync

An alternative to rsync. Seems mature, there's a free version.

Basic security

Certificate stores and key stores

Introduction

There are quite a few cert and keystores on a Linux machine.
  • OS-level - package manager - recall dpkg is the foundational tool, apt is a front-end to it, see package manager info
    • See https://www.debian.org/doc/manuals/aptitude/ch02s02s05.en.html
    • The list of keys that apt will trust is stored in the keyring file /etc/apt/trusted.gpg.
    • Once you have the GPG key, you can add it to this file by executing the command gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --import newkey.asc. aptitude will then trust any archive that is signed with the key contained in newkey.asc.
  • For TLS/SSL, managed by the command update-ca-certificates, interacts with OpenSSL
    • /etc/ssl/ for certs, private keys, config
    • /etc/ssl/certs for certificates
    • Command update-ca-certificates updates the directory /etc/ssl/certs to hold SSL certificates and generates ca-certificates.crt, a concatenated single-file list of certificates. It reads the file /etc/ca-certificates.conf. Each line gives a pathname of a CA certificate under /usr/share/ca-certificates that should be trusted.
    • Furthermore all certificates with a .crt extension found below /usr/local/share/ca-certificates are also included as implicitly trusted (see the sketch after this list)
    • Before terminating, update-ca-certificates invokes run-parts on /etc/ca-certificates/update.d and calls each hook with a list of certificates: those added are prefixed with a +, those removed are prefixed with a -.
  • Browsers such as Firefox and Chrome can use the OS certificate store through NSS, see https://www.baeldung.com/linux/add-self-signed-certificate-trusted-list
    • For Firefox: use the certutil command from NSS
    • For Chrome: through Chrome GUI import
  • Java keystore
  • GPG and GPA
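
A minimal sketch of adding a locally trusted TLS certificate via update-ca-certificates (mycert.crt is a hypothetical file name):
sudo cp mycert.crt /usr/local/share/ca-certificates/    # file must carry the .crt extension
sudo update-ca-certificates                             # regenerates /etc/ssl/certs and ca-certificates.crt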

Security - mini audit

Introduction

Start with the security HOWTO in /usr/doc/howto/en/html/Security-HOWTO.html. On the web, check-out the Linux Security homepage (url in /LinuxWeb1000ITLinux.html). How about LASG, PAM, /etc/security entries, hardening SuSE ...?

PAM

Check out man page. Linux uses either a single large /etc/pam.conf file, or a number of files in /etc/pam.d (if the latter is present, the former is ignored). SuSE 6.4 came with /etc/pam.d provided. Documentation is found in e.g. /usr/doc/packages/pam/text. Apparently the /etc/security entries also seem related to PAM in some way.

Mini audit

Use e.g.:
  • "netstat -an" should only list required servers ("nmap" is of course an alternative)
  • "find / -perm -4000 -type f" lists the privileged (SUID) files - there should be less than e.g. 100
  • "find / -perm -2 '!' -type | '!' -perm 1000" indicates the world writable files (spelling?)
  • and check security patches and your vendor's website
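
Two of the checks above as concrete one-liners (a sketch; run with sudo to avoid permission noise):
netstat -an | grep LISTEN                              # listening servers only
sudo find / -perm -4000 -type f 2>/dev/null | wc -l    # how many SUID files?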

Lynis

Lynis performs an in-depth local scan. Covers multiple distributions. Uses multiple benchmarks (CIS, NIST, NSA, OpenSCAP data, vendor guides and recommendations (e.g. Debian, Gentoo, Red Hat)). But apparently this is all 'integrated' and you cannot choose.

Plug-ins are available e.g. for Debian that collect additional information (but do not perform tests).

Two versions:
  • Free (with some limitations)
  • 'Enterprise version' (paying, with more features, including for compliance testing)
Finding out about your installation:
  • apt show lynis (to find out whether it's installed)
  • man lynis (manpage)
  • dpkg -L lynis (all files)
  • execute lynis to show the existing profiles:
    • 'sudo ./lynis show profiles' returns e.g. /etc/lynis/default.prf
  • execute lynis to show the current settings from the profiles:
    • 'sudo ./lynis show settings' returns a list of basic settings (e.g. verbose)
Useful locations:
  • Executable sits in /usr/sbin/lynis
  • Documentation sits in /usr/share/doc/lynis/
  • Plugins (e.g. Debian) are in /etc/plugins/lynis/
  • Reports go to /var/log/lynis.log and lynis-report.dat
  • The actual tests are specified in /usr/share/lynis/include/... (scripts)
Usage on BlackTiger:
  • cd /usr/sbin/
  • sudo ./lynis show options
  • sudo ./lynis audit system --reverse-colors (for better visibility)
  • sudo ./lynis audit system --verbose
    • Output goes to the terminal (and can be copy/pasted there)
    • Reports in /var/log/lynis.log and lynis-report.dat
    • Show details of a test by 'lynis show details TEST-ID'
Lynis Enterprise allows you to select a 'policy', e.g.
  • Best Practices: Docker
  • Best Practices for System Administration
  • NIST SP 800-171 Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations (US-flavor)
  • NIST SP 800-53 Security and Privacy Controls for Information Systems and Organizations (US-flavor e.g. PIV use)

OpenSCAP

Introduction

Security Content Automation Protocol (SCAP) is a synthesis of interoperable specifications derived from community ideas. Content is structured in the eXtensible Configuration Checklist Description Format (XCCDF) for automation; OVAL definitions drive the evaluation.

Terminology

The term SCAP Security Guide is an umbrella term to refer to a security policy written in a form of SCAP documents. You can install a SCAP Security Guide on Debian 10 and newer using apt: 'apt install ssg-base ssg-debderived ssg-debian ssg-nondebian ssg-applications'.

After installing, the SCAP Security Guide policies are in the SCAP SSG directory. There are files for every platform available in the form of XCCDF, OVAL or datastream documents (datastreams combine XCCDF and OVAL). In most use cases, you want to use the datastreams, whose file names end with -ds.xml.

You can use a SCAP Security Guide:
  • in the CLI OpenSCAP scanner, oscap, its purpose is to scan the local machine
    • A concrete security policy is selected by choosing a Profile; an XCCDF profile decides which rules are selected and which values they use
    • You can display all available Profiles using the info subcommand: oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
  • using the scap-workbench GUI
Customisation: ref EBSI work. Illustration: on debbybuster, in the ssg directory, running oscap info ssg-debian8-xccdf.xml shows Profiles (standard, anssi_...) and Referenced check files.
  • Profiles:
    • Title: Standard System Security Profile for Debian 8 - Id: standard
    • Documented on the French ANSSI website: one profile document (PDF), with 4 levels defined:
      • Title: Profile for ANSSI DAT-NT28 Restrictive Level - Id: anssi_np_nt28_restrictive
      • Title: Profile for ANSSI DAT-NT28 Minimal Level - Id: anssi_np_nt28_minimal
      • Title: Profile for ANSSI DAT-NT28 High (Enforced) Level - Id: anssi_np_nt28_high
      • Title: Profile for ANSSI DAT-NT28 Average (Intermediate) Level - Id: anssi_np_nt28_average
  • Referenced check files:
    • ssg-debian8-oval.xml - system: http://oval.mitre.org/XMLSchema/oval-definitions-5
    • ssg-debian8-ocil.xml - system: http://scap.nist.gov/schema/ocil/2
Starting scap-workbench allows to select the XCCDF file ssg-debian8-xccdf.xml. Once opened, one of the Profiles can be selected. Customisation is empty, the GUI allows to select an XCCDF tailoring file. Such a file is expected to have an element inside that specifies the 'tailoring'. How has this been done for EBSI?

Open-SCAP for Debian

Debian installation: on Debian (Sid), you can use:
  • apt install ssg-debian # for Debian guides
  • apt install ssg-debderived # for Debian-based distributions (e.g. Ubuntu) guides
  • apt install ssg-nondebian # for other distributions guides (RHEL, Fedora, etc.)
  • apt install ssg-applications # for application-oriented guides (Firefox, JBoss, etc.)
Debian 10 SSG - 'the SCAP content is available in the scap-security-guide package which is developed at https://www.open-scap.org/security-policies/scap-security-guide.'
  • Open-SCAP Debian 10 SSG
    • Standard System Security Profile for Debian 10
      • where to get it?
      • Execution: 'oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard /usr/share/xml/scap/ssg/content/ssg-debian10-ds.xml'
    • ANSSI DAT NT28 Minimal Profile for Debian 10
Sample evaluation with specification of datastream (xml), xccdf_id etc. : oscap xccdf eval --datastream-id scap_org.open-scap_datastream_from_xccdf_ssg-rhel-osp7-xccdf-1.2.xml --xccdf-id scap_org.open-scap_cref_ssg-rhel-osp7-xccdf-1.2.xml --oval-results --results /tmp/xccdf-results.xml --results-arf /tmp/arf.xml --report /tmp/report.html /usr/share/xml/scap/ssg/content/ssg-rhel-osp7-ds.xml
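
A sketch combining the Debian 10 standard profile above with the results/report options from the sample command (output paths are arbitrary):
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard --results /tmp/xccdf-results.xml --report /tmp/report.html /usr/share/xml/scap/ssg/content/ssg-debian10-ds.xml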

Local installation on Debian

Finding out about your installation:
  • apt-get install libopenscap8
  • apt show libopenscap8 (to find out whether its installed)
  • dpkg -L libopenscap8 (all files)

Installation log

Installation log:

Execute 'sudo apt install ssg-base ssg-debderived ssg-debian ssg-nondebian ssg-applications'

Reading package lists... Done - Building dependency tree - Reading state information... Done

The following packages were automatically installed and are no longer required: linux-headers-4.19.0-16-amd64 linux-headers-4.19.0-16-common linux-image-4.19.0-16-amd64

Use 'sudo apt autoremove' to remove them.

Suggested packages: ansible puppet

The following NEW packages will be installed: ssg-applications ssg-base ssg-debderived ssg-debian ssg-nondebian

0 upgraded, 5 newly installed, 0 to remove and 14 not upgraded.

Need to get 5,735 kB of archives.

After this operation, 287 MB of additional disk space will be used.

Get:1 http://deb.debian.org/debian buster/main amd64 ssg-base all 0.1.39-2 [22.6 kB]

Get:2 http://deb.debian.org/debian buster/main amd64 ssg-applications all 0.1.39-2 [135 kB]

Get:3 http://deb.debian.org/debian buster/main amd64 ssg-debderived all 0.1.39-2 [167 kB]

Get:4 http://deb.debian.org/debian buster/main amd64 ssg-debian all 0.1.39-2 [156 kB]

Get:5 http://deb.debian.org/debian buster/main amd64 ssg-nondebian all 0.1.39-2 [5,254 kB]

Fetched 5,735 kB in 2s (2,953 kB/s)

Selecting previously unselected package ssg-base.

(Reading database ... 339782 files and directories currently installed.)
Preparing to unpack .../ssg-base_0.1.39-2_all.deb ... Unpacking ssg-base (0.1.39-2) ...
Selecting previously unselected package ssg-applications. Preparing to unpack .../ssg-applications_0.1.39-2_all.deb ... Unpacking ssg-applications (0.1.39-2) ...
Selecting previously unselected package ssg-debderived. Preparing to unpack .../ssg-debderived_0.1.39-2_all.deb ... Unpacking ssg-debderived (0.1.39-2) ...
Selecting previously unselected package ssg-debian. Preparing to unpack .../ssg-debian_0.1.39-2_all.deb ... Unpacking ssg-debian (0.1.39-2) ...
Selecting previously unselected package ssg-nondebian. Preparing to unpack .../ssg-nondebian_0.1.39-2_all.deb ... Unpacking ssg-nondebian (0.1.39-2) ...

Setting up ssg-base (0.1.39-2) ...
Setting up ssg-debderived (0.1.39-2) ...
Setting up ssg-nondebian (0.1.39-2) ...
Setting up ssg-debian (0.1.39-2) ...
Setting up ssg-applications (0.1.39-2) ...
Processing triggers for man-db (2.8.5-2) ...

Exploration

Listing the XCCDF files of a RHEL 6 datastream: oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml, showing multiple xccdf's (profiles) and OVAL/OCILs (check files).

Document type: Source Data Stream Imported: 2018-07-26T16:58:28

Stream: scap_org.open-scap_datastream_from_xccdf_ssg-rhel6-xccdf-1.2.xml

Generated: (null) Version: 1.2

Checklists:

Ref-Id: scap_org.open-scap_cref_ssg-rhel6-xccdf-1.2.xml Status: draft Generated: 2018-07-26 Resolved: true

Profiles:
  • Title: United States Government Configuration Baseline (USGCB)
  • Id: xccdf_org.ssgproject.content_profile_usgcb-rhel6-server
  • Title: DISA STIG for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_stig-rhel6-disa
  • Title: Standard System Security Profile for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_standard
  • Title: Server Baseline
  • Id: xccdf_org.ssgproject.content_profile_server
  • Title: Red Hat Corporate Profile for Certified Cloud Providers (RH CCP)
  • Id: xccdf_org.ssgproject.content_profile_rht-ccp
  • Title: PCI-DSS v3 Control Baseline for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_pci-dss
  • Title: CNSSI 1253 Low/Low/Low Control Baseline for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_nist-CL-IL-AL
  • Title: FTP Server Profile (vsftpd)
  • Id: xccdf_org.ssgproject.content_profile_ftp-server
  • Title: FISMA Medium for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_fisma-medium-rhel6-server
  • Title: Desktop Baseline
  • Id: xccdf_org.ssgproject.content_profile_desktop
  • Title: CSCF RHEL6 MLS Core Baseline
  • Id: xccdf_org.ssgproject.content_profile_CSCF-RHEL6-MLS
  • Title: Example Server Profile
  • Id: xccdf_org.ssgproject.content_profile_CS2
  • Title: C2S for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_C2S
Referenced check files:
  • ssg-rhel6-oval.xml
  • system: http://oval.mitre.org/XMLSchema/oval-definitions-5
  • ssg-rhel6-ocil.xml
  • system: http://scap.nist.gov/schema/ocil/2
Ref-Id: scap_org.open-scap_cref_ssg-rhel6-pcidss-xccdf-1.2.xml Status: draft Generated: 2018-07-26 Resolved: true

Profiles:
  • Title: PCI-DSS v3 Control Baseline for Red Hat Enterprise Linux 6
  • Id: xccdf_org.ssgproject.content_profile_pci-dss_centric
  • Referenced check files:
    • ssg-rhel6-oval.xml
    • system: http://oval.mitre.org/XMLSchema/oval-definitions-5
    • ssg-rhel6-ocil.xml
    • system: http://scap.nist.gov/schema/ocil/2
Checks:
  • Ref-Id: scap_org.open-scap_cref_ssg-rhel6-oval.xml
  • Ref-Id: scap_org.open-scap_cref_ssg-rhel6-ocil.xml
  • Ref-Id: scap_org.open-scap_cref_ssg-rhel6-cpe-oval.xml
  • Ref-Id: scap_org.open-scap_cref_ssg-rhel6-oval.xml000
  • Ref-Id: scap_org.open-scap_cref_ssg-rhel6-ocil.xml000
Dictionaries: Ref-Id: scap_org.open-scap_cref_ssg-rhel6-cpe-dictionary.xml

Execution

The oscap command:
  • oscap -h
  • oscap -V
  • oscap [general-options] module operation [operation-options-and-arguments]
    • [general-options] : -h or -V
    • module :
      • info - to list information
      • xccdf - to evaluate or remediate against content
      • oval - to evaluate OVAL definitions against the system, collect information, ...
      • ds - to handle data streams
      • cpe - to check/match names
      • cvss - to calculate a score
      • cve
Sample listing of content of datastream: oscap info ssg-debian8-ds.xml, showing profiles:
  • Standard System Security Profile for Debian 8
  • Id: xccdf_org.ssgproject.content_profile_standard
  • ...
Listing content of a profile: oscap info ssg-debian8-ds.xml --profile xccdf_org.ssgproject.content_profile_standard - does not yield details.

Open file ssg-debian8-xccdf.xml to inspect it and you find:
  • Description (in prose)
  • Profile id="standard"
    • Title: 'Standard System Security Profile for Debian 8'
    • Statements such as '<select idref="sshd_set_idle_timeout" selected="true"/>'
  • Profile id="anssi_np_nt28_restrictive"
    • Title: '...'
    • Select statements ...
  • Profile id="anssi_np_nt28_minimal"
  • Profile id="anssi_np_nt28_high"
  • Profile id="anssi_np_nt28_average"
  • Group id="remediation_functions" - definitions of actual remediation scripts
  • Group id="services" - definitions of how to remove services such as NIS

Illustration of 'xccdf eval':
  • cd /usr/share/xml/scap/ssg/content
  • oscap xccdf eval ssg-debian8-xccdf.xml --report debbybusterreport1
  • => not very relevant since I'm running Debian 10 - so I need a Debian 10 XCCDF - see SSG guides - but how to download?
  • Scap workbench, just 'scap-workbench' to execute it

Secure delete and privacy

shred

Linux manual removal of trash is done by e.g. rm -rf ~/.local/share/Trash/*

Source: http://techthrob.com/2009/03/howto-delete-files-permanently-and-securely-in-linux/

Debian etc come with the “shred” command. The basic format of the shred command is this: shred [OPTIONS] filename

Common options you’ll want to use when you shred a file are:
  • -n [N] Overwrite a file N times. For example, -n 20 will perform twenty passes over the file’s contents.
  • -u Remove the file after you’ve shredded it. You’ll probably want to use this option in most cases.
  • -z After shredding a file with random bits (ones and zeros), overwrite the file with only zeros. This is used to try and hide the fact that the file was shredded.
So, for example, to shred a file “topsecret.txt” with twenty-six iterations, delete it afterwards, and hide the fact that it was shredded, run: shred -u -z -n 26 topsecret.txt

secure-delete tools (also directories)

Installation: apt-get install secure-delete

Commands: srm, smem, sfill, sswap

srm – secure remove

This tool is basically a more advanced version of the “shred” command. Instead of just overwriting your files with random data, it uses a special process – a combination of random data, zeros, and special values developed by cryptographer Peter Gutmann – to really, really make sure your files are irrecoverable. It will assign a random value for the filename, hiding that key piece of evidence. srm is used like this: srm myfile.txt Or, for directories: srm -r myfiles/ with the “-r” for recursive mode.

smem – secure memory wipe

While it’s true that your computer’s RAM is emptied when you power off your computer, you probably didn’t know that residual traces of data remain in memory, just as on hard drives, until they are overwritten many times. This means that it’s relatively easy for someone with the right tools to figure out what you had stored in RAM, which may be the contents of important files, internet activity, or whatever else it is you do with your computer. The basic use of smem is the same as srm, although it is a good deal slower. There are options to speed things up, but they increase the risk by performing fewer overwrite passes. For a complete list of options, read the manual on smem (the man smem command), but its basic use is just running the “smem” command.

sfill – secure free space wipe

sfill follows the same general method as srm. It is used to wipe all the free space on your disk, where past files have existed. This is particularly useful if you are getting rid of a hard disk for good; you can boot a LiveCD, delete everything on the disk, and then use sfill to make sure that nothing is recoverable. You may have to be root in order to use this tool effectively, since regular users might not have write access to certain filesystems, and you might have a quota enabled. sfill usage is: sfill mountpoint/ If you specify a directory that isn’t a mountpoint itself (for example, if you have /home/ on a separate partition, but you select /home/me/fun), sfill will wipe the freespace on which the directory resides (in the above example, the /home partition).

sswap – secure swap wipe

The sswap program is used to wipe your swap partitions, which store the data of running programs when your RAM is filled up. Therefore, if you feel a need to run smem, it’s probably a good idea to run sswap, too. However, before you use it you must disable your swap partition. You can determine your mounted swap devices by running: cat /proc/swaps Or looking in your /etc/fstab file for filesystems of the type “swap”. In my case, my swap partition is /dev/sda5, so to disable it I run: sudo swapoff /dev/sda5 Once your swap device is disabled, you can wipe it with sswap. In my case, I run: sudo sswap /dev/sda5 If you aren’t running this as root (sudo), you’re likely to get a permission denied error. As with any of the above commands, you can get more information while it’s running by adding the “-v” option for verbose mode. Also, don’t forget to re-enable swap when you’re finished! Use the swapon command: sudo swapon /dev/sda5

Passes

A commonly asked question is, “how many passes does it take before a file can’t possibly be recovered by advanced tools, such as those used by law-enforcement?” The answers here vary, and you can get a lot of extra information via google, but the basics are that the US Government’s standard is 7 passes, while data has been known to be recovered from as many as 14 passes. The “shred” tool allows you to specify the number of passes you wish to make; the Secure-Delete tools use a default of 38 passes (enabling the “fast” and “lessen” options on the secure-delete tools significantly decreases the number of passes, however). Of course, more passes means more time, so there’s a trade-off here; depending on how private the data is, and how much time you have available, you may want to use a fewer or greater number of passes.

Filesystems

Another thing to note is that RAID configurations and networked filesystems may affect the performance and effectiveness of these tools. With a networked filesystem, for example, you can’t wipe the remote machine’s memory and swap unless you can SSH into it. With RAID striping, there are more disks to consider, hence more redundant data traces, so you may want to consider doing a few extra passes, especially when using the shred tool.

Privacy

Remember stuff will reside in the ~/.cache/thumbnails directories.

Basic programming (vi, emacs, gcc, ... )

vi and beav

Remember it's
esc - : - w to write the file,
esc - : q to quit.

beav

Seems to be a hex editor. Check this out.
 

Emacs

OK. Works nicely.
Question: how do I enter shell commands from within Emacs?
Answer: esc - x - 'shell'. Then enter your shell commands.
Question: how do I create special characters like the at-sign?
Answer: this seems to depend on whether you run under X or not...

Question: how do I display line numbers?
Answer: esc - x - 'line-number-mode'
Question: how do I modify the size of the split windows?
Answer: ?
Note that there is also a more sophisticated "xemacs".
 

gcc and gpc (C and Pascal)

Refer also to gnu tools info.

Use "man gcc / man gdb / man gpc". Check-out gcc.gnu.org . Note that "gcc -v" gives you your gcc basics. SuSE 7.2 comes with gcc 2.95.3 .

Compilation

ALTERNATIVE 1 Plain gcc compilation (direct invocation)

For example "gcc -v -o testy showenv.c" where:
  • -v = verbose
  • -o = is followed by the name of the executable
  • showenv.c = the sourcecode
Execution is by cd-ing into the directory and specifying the full path of the executable.

PROBLEM - Number Theory A Programmer's Guide. Copied source code to /CH1 and numtype.h to /usr/include. Had to change NUMTYPE.H into numtype.h. Then ran into a cc1plus problem (signalled as a 'gcc installation problem'). The gcc manual explains that cc1plus is the name of the compiler for C++. So what?

PROBLEM - Cryptography in C and C++ - gcc complains about missing flint.h and assert.h - copied them to /usr/include - OK but now whole list of "undefined references". SOLUTION - "gcc -v -o testrand testrand.c /flint/src/flint.c" i.e. statically link with flint.c itself. Other interesting gcc options include:
  • -E : don't compile, just preprocess
  • -S : save the intermediate assembly language (remove the -o flag then)
  • -c : create object files ending in .o
  • -L/src/local/lib : add /src/local/lib to the library search path when linking
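
For example, compiling and linking in two steps with the -c and -o flags listed above (showenv.c is the sample source used earlier):
gcc -c showenv.c            # compile only, produces showenv.o
gcc -o showenv showenv.o    # link the object file into the executable 'showenv'
./showenv                   # run it from the current directory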

ALTERNATIVE 2 MAKE

For compiling, you can use a makefile, residing in the same directory as the sources (typically named 'makefile' or 'Makefile'); you then run it with the 'make' command.

More info on 'make' in next section.

See inside make file for usage.
---contents of sample 'make'-file ---
# makefile : compilation resulting in the executable 'showenv'
# execution of the makefile: "make showenv"
showenv:
	gcc showenv.c -v -o showenv
--- end of contents of sample 'make'-file ---
(Note: the command line under the 'showenv:' target must be indented with a tab.)

In case of problems with make, you can try "make programname -d" (d for debug) - quid this a.out ---? It makes a lot of sense to use make with a prefix

ALTERNATIVE 3 Automake, autoconf, libtool

Can be downloaded and installed from gnu.org .

ALTERNATIVE 4 ANT

The Java way...

Execution

Execution of the program: "/Kassandra..../full-path/showenv"

configure and make

Refer to LW0400ITDEVtools.

Java

Debian approach

The 'GNOME Software' utility does not do OS-level stuff or Java. So either use apt or Synaptic.
  • Install 'sudo apt-get install openjdk-17-jdk'
  • Uninstall: 'sudo apt auto-remove openjdk-17-jdk'
Files go mostly in /usr/lib and /usr/share. There are a lot of libraries and other stuff available. Installation log shows installation of:
  • Libraries: libice-dev libpthread-stubs0-dev libsm-dev libx11-dev libxau-dev libxcb1-dev libxdmcp-dev libxt-dev x11proto-dev xorg-sgml-doctools xtrans-dev
  • Two jdk packages: openjdk-17-jdk openjdk-17-jdk-headless
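
A quick post-install sanity check (a sketch; the update-alternatives entry assumes Debian's alternatives system manages 'java'):
java -version                        # runtime found on the PATH
javac -version                       # compiler from openjdk-17-jdk
update-alternatives --display java   # which JVM the 'java' symlink points to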

openjdk-17-jdk

Files included in openjdk-17-jdk are: /. /usr /usr/lib /usr/lib/jvm /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/java-17-openjdk-amd64/bin /usr/lib/jvm/java-17-openjdk-amd64/bin/jconsole /usr/lib/jvm/java-17-openjdk-amd64/include /usr/lib/jvm/java-17-openjdk-amd64/include/jawt.h /usr/lib/jvm/java-17-openjdk-amd64/include/linux /usr/lib/jvm/java-17-openjdk-amd64/include/linux/jawt_md.h /usr/lib/jvm/java-17-openjdk-amd64/man /usr/lib/jvm/java-17-openjdk-amd64/man/man1 /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jconsole.1.gz /usr/lib/jvm/openjdk-17 /usr/lib/jvm/openjdk-17/src.zip /usr/share /usr/share/doc /usr/share/doc/openjdk-17-jdk /usr/share/doc/openjdk-17-jre-headless /usr/share/doc/openjdk-17-jre-headless/test-amd64 /usr/share/doc/openjdk-17-jre-headless/test-amd64/jtreg_output-hotspot.log

openjdk-17-jdk-headless

Files included in openjdk-17-jdk-headless are: /. /usr /usr/lib /usr/lib/jvm /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/java-17-openjdk-amd64/bin /usr/lib/jvm/java-17-openjdk-amd64/bin/jar /usr/lib/jvm/java-17-openjdk-amd64/bin/jarsigner /usr/lib/jvm/java-17-openjdk-amd64/bin/javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javadoc /usr/lib/jvm/java-17-openjdk-amd64/bin/javap /usr/lib/jvm/java-17-openjdk-amd64/bin/jcmd /usr/lib/jvm/java-17-openjdk-amd64/bin/jdb /usr/lib/jvm/java-17-openjdk-amd64/bin/jdeprscan /usr/lib/jvm/java-17-openjdk-amd64/bin/jdeps /usr/lib/jvm/java-17-openjdk-amd64/bin/jfr /usr/lib/jvm/java-17-openjdk-amd64/bin/jhsdb /usr/lib/jvm/java-17-openjdk-amd64/bin/jimage /usr/lib/jvm/java-17-openjdk-amd64/bin/jinfo /usr/lib/jvm/java-17-openjdk-amd64/bin/jlink /usr/lib/jvm/java-17-openjdk-amd64/bin/jmap /usr/lib/jvm/java-17-openjdk-amd64/bin/jmod /usr/lib/jvm/java-17-openjdk-amd64/bin/jps /usr/lib/jvm/java-17-openjdk-amd64/bin/jrunscript /usr/lib/jvm/java-17-openjdk-amd64/bin/jshell /usr/lib/jvm/java-17-openjdk-amd64/bin/jstack /usr/lib/jvm/java-17-openjdk-amd64/bin/jstat /usr/lib/jvm/java-17-openjdk-amd64/bin/jstatd /usr/lib/jvm/java-17-openjdk-amd64/bin/serialver /usr/lib/jvm/java-17-openjdk-amd64/include /usr/lib/jvm/java-17-openjdk-amd64/include/classfile_constants.h /usr/lib/jvm/java-17-openjdk-amd64/include/jdwpTransport.h /usr/lib/jvm/java-17-openjdk-amd64/include/jni.h /usr/lib/jvm/java-17-openjdk-amd64/include/jvmti.h /usr/lib/jvm/java-17-openjdk-amd64/include/jvmticmlr.h /usr/lib/jvm/java-17-openjdk-amd64/include/linux /usr/lib/jvm/java-17-openjdk-amd64/include/linux/jni_md.h /usr/lib/jvm/java-17-openjdk-amd64/jmods /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.base.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.compiler.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.datatransfer.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.desktop.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.instrument.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.logging.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.management.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.management.rmi.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.naming.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.net.http.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.prefs.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.rmi.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.scripting.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.se.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.security.jgss.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.security.sasl.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.smartcardio.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.sql.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.sql.rowset.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.transaction.xa.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.xml.crypto.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/java.xml.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.accessibility.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.attach.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.charsets.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.compiler.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.crypto.cryptoki.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.crypto.ec.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.dynalink.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.editpad.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.hotspot.agent.jmod 
/usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.httpserver.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.incubator.foreign.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.incubator.vector.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.ed.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.jvmstat.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.le.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.opt.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.vm.ci.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.vm.compiler.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.internal.vm.compiler.management.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jartool.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.javadoc.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jcmd.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jconsole.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jdeps.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jdi.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jdwp.agent.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jfr.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jlink.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jpackage.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jshell.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jsobject.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.jstatd.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.localedata.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.management.agent.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.management.jfr.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.management.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.naming.dns.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.naming.rmi.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.net.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.nio.mapmode.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.random.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.sctp.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.security.auth.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.security.jgss.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.unsupported.desktop.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.unsupported.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.xml.dom.jmod /usr/lib/jvm/java-17-openjdk-amd64/jmods/jdk.zipfs.jmod /usr/lib/jvm/java-17-openjdk-amd64/lib /usr/lib/jvm/java-17-openjdk-amd64/lib/src.zip /usr/lib/jvm/java-17-openjdk-amd64/man /usr/lib/jvm/java-17-openjdk-amd64/man/man1 /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jar.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jarsigner.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/javac.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/javadoc.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/javap.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jcmd.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jdb.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jdeprscan.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jdeps.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jfr.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jhsdb.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jinfo.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jlink.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jmap.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jmod.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jps.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jrunscript.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jshell.1.gz 
/usr/lib/jvm/java-17-openjdk-amd64/man/man1/jstack.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jstat.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/jstatd.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/serialver.1.gz /usr/share /usr/share/doc /usr/share/doc/openjdk-17-jdk-headless

openjdk-17-jre

Files included in openjdk-17-jre are: /. /usr /usr/lib /usr/lib/jvm /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/java-17-openjdk-amd64/lib /usr/lib/jvm/java-17-openjdk-amd64/lib/libatk-wrapper.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libawt_xawt.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjawt.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libsplashscreen.so /usr/share /usr/share/application-registry /usr/share/application-registry/openjdk-17-archive.applications /usr/share/applications /usr/share/doc /usr/share/doc/openjdk-17-jre /usr/share/icons /usr/share/icons/hicolor /usr/share/icons/hicolor/16x16 /usr/share/icons/hicolor/16x16/apps /usr/share/icons/hicolor/16x16/apps/openjdk-17.png /usr/share/icons/hicolor/24x24 /usr/share/icons/hicolor/24x24/apps /usr/share/icons/hicolor/24x24/apps/openjdk-17.png /usr/share/icons/hicolor/32x32 /usr/share/icons/hicolor/32x32/apps /usr/share/icons/hicolor/32x32/apps/openjdk-17.png /usr/share/icons/hicolor/48x48 /usr/share/icons/hicolor/48x48/apps /usr/share/icons/hicolor/48x48/apps/openjdk-17.png /usr/share/lintian /usr/share/lintian/overrides /usr/share/lintian/overrides/openjdk-17-jre /usr/share/mime-info /usr/share/mime-info/openjdk-17-archive.keys /usr/share/mime-info/openjdk-17-archive.mime /usr/share/pixmaps /usr/share/pixmaps/openjdk-17.xpm

openjdk-17-jre-headless

Files included in openjdk-17-jre-headless are: /. /etc /etc/java-17-openjdk /etc/java-17-openjdk/accessibility.properties /etc/java-17-openjdk/jfr /etc/java-17-openjdk/jfr/default.jfc /etc/java-17-openjdk/jfr/profile.jfc /etc/java-17-openjdk/jvm-amd64.cfg /etc/java-17-openjdk/logging.properties /etc/java-17-openjdk/management /etc/java-17-openjdk/management/jmxremote.access /etc/java-17-openjdk/management/management.properties /etc/java-17-openjdk/net.properties /etc/java-17-openjdk/psfont.properties.ja /etc/java-17-openjdk/psfontj2d.properties /etc/java-17-openjdk/security /etc/java-17-openjdk/security/blocked.certs /etc/java-17-openjdk/security/default.policy /etc/java-17-openjdk/security/java.policy /etc/java-17-openjdk/security/java.security /etc/java-17-openjdk/security/nss.cfg /etc/java-17-openjdk/security/policy /etc/java-17-openjdk/security/policy/README.txt /etc/java-17-openjdk/security/policy/limited /etc/java-17-openjdk/security/policy/limited/default_US_export.policy /etc/java-17-openjdk/security/policy/limited/default_local.policy /etc/java-17-openjdk/security/policy/limited/exempt_local.policy /etc/java-17-openjdk/security/policy/unlimited /etc/java-17-openjdk/security/policy/unlimited/default_US_export.policy /etc/java-17-openjdk/security/policy/unlimited/default_local.policy /etc/java-17-openjdk/security/public_suffix_list.dat /etc/java-17-openjdk/sound.properties /etc/java-17-openjdk/swing.properties /usr /usr/lib /usr/lib/debug /usr/lib/debug/usr /usr/lib/debug/usr/lib /usr/lib/debug/usr/lib/jvm /usr/lib/debug/usr/lib/jvm/java-1.17.0-openjdk-amd64 /usr/lib/debug/usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm /usr/lib/jvm/.java-1.17.0-openjdk-amd64.jinfo /usr/lib/jvm/java-1.17.0-openjdk-amd64 /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/java-17-openjdk-amd64/bin /usr/lib/jvm/java-17-openjdk-amd64/bin/java /usr/lib/jvm/java-17-openjdk-amd64/bin/jpackage /usr/lib/jvm/java-17-openjdk-amd64/bin/keytool /usr/lib/jvm/java-17-openjdk-amd64/bin/rmiregistry /usr/lib/jvm/java-17-openjdk-amd64/conf /usr/lib/jvm/java-17-openjdk-amd64/conf/accessibility.properties /usr/lib/jvm/java-17-openjdk-amd64/conf/logging.properties /usr/lib/jvm/java-17-openjdk-amd64/conf/management /usr/lib/jvm/java-17-openjdk-amd64/conf/management/jmxremote.access /usr/lib/jvm/java-17-openjdk-amd64/conf/management/management.properties /usr/lib/jvm/java-17-openjdk-amd64/conf/net.properties /usr/lib/jvm/java-17-openjdk-amd64/conf/security /usr/lib/jvm/java-17-openjdk-amd64/conf/security/java.policy /usr/lib/jvm/java-17-openjdk-amd64/conf/security/java.security /usr/lib/jvm/java-17-openjdk-amd64/conf/security/nss.cfg /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/README.txt /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/limited /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/limited/default_US_export.policy /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/limited/default_local.policy /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/limited/exempt_local.policy /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/unlimited /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/unlimited/default_US_export.policy /usr/lib/jvm/java-17-openjdk-amd64/conf/security/policy/unlimited/default_local.policy /usr/lib/jvm/java-17-openjdk-amd64/conf/sound.properties /usr/lib/jvm/java-17-openjdk-amd64/conf/swing.properties /usr/lib/jvm/java-17-openjdk-amd64/docs /usr/lib/jvm/java-17-openjdk-amd64/legal 
/usr/lib/jvm/java-17-openjdk-amd64/legal/java.base /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/aes.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/asm.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/c-libutl.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/cldr.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/icu.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/public_suffix.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.base/unicode.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.compiler /usr/lib/jvm/java-17-openjdk-amd64/legal/java.compiler/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.datatransfer /usr/lib/jvm/java-17-openjdk-amd64/legal/java.datatransfer/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.desktop /usr/lib/jvm/java-17-openjdk-amd64/legal/java.desktop/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.desktop/colorimaging.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.desktop/mesa3d.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.desktop/xwd.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.instrument /usr/lib/jvm/java-17-openjdk-amd64/legal/java.instrument/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.logging /usr/lib/jvm/java-17-openjdk-amd64/legal/java.logging/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.management /usr/lib/jvm/java-17-openjdk-amd64/legal/java.management.rmi /usr/lib/jvm/java-17-openjdk-amd64/legal/java.management.rmi/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.management/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.naming /usr/lib/jvm/java-17-openjdk-amd64/legal/java.naming/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.net.http /usr/lib/jvm/java-17-openjdk-amd64/legal/java.net.http/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.prefs /usr/lib/jvm/java-17-openjdk-amd64/legal/java.prefs/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.rmi /usr/lib/jvm/java-17-openjdk-amd64/legal/java.rmi/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.scripting /usr/lib/jvm/java-17-openjdk-amd64/legal/java.scripting/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.se /usr/lib/jvm/java-17-openjdk-amd64/legal/java.se/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.security.jgss /usr/lib/jvm/java-17-openjdk-amd64/legal/java.security.jgss/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.security.sasl /usr/lib/jvm/java-17-openjdk-amd64/legal/java.security.sasl/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.smartcardio /usr/lib/jvm/java-17-openjdk-amd64/legal/java.smartcardio/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.smartcardio/pcsclite.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.sql /usr/lib/jvm/java-17-openjdk-amd64/legal/java.sql.rowset /usr/lib/jvm/java-17-openjdk-amd64/legal/java.sql.rowset/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.sql/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.transaction.xa /usr/lib/jvm/java-17-openjdk-amd64/legal/java.transaction.xa/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml.crypto /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml.crypto/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml.crypto/santuario.md 
/usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml/bcel.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml/dom.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml/jcup.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml/xalan.md /usr/lib/jvm/java-17-openjdk-amd64/legal/java.xml/xerces.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.accessibility /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.accessibility/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.attach /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.attach/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.charsets /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.charsets/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.compiler /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.compiler/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.crypto.cryptoki /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.crypto.cryptoki/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.crypto.cryptoki/pkcs11cryptotoken.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.crypto.cryptoki/pkcs11wrapper.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.crypto.ec /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.crypto.ec/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.dynalink /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.dynalink/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.dynalink/dynalink.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.editpad /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.editpad/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.hotspot.agent /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.hotspot.agent/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.httpserver /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.httpserver/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.incubator.foreign /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.incubator.foreign/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.incubator.vector /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.incubator.vector/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.ed /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.ed/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.jvmstat /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.jvmstat/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.le /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.le/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.le/jline.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.opt /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.opt/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.opt/jopt-simple.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.vm.ci /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.vm.ci/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.vm.compiler /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.vm.compiler.management /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.vm.compiler.management/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.internal.vm.compiler/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jartool /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jartool/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.javadoc 
/usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.javadoc/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.javadoc/jquery.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.javadoc/jqueryUI.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jcmd /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jcmd/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jconsole /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jconsole/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jdeps /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jdeps/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jdi /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jdi/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jdwp.agent /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jdwp.agent/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jfr /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jfr/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jlink /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jlink/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jpackage /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jpackage/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jshell /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jshell/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jsobject /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jsobject/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jstatd /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.jstatd/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.localedata /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.localedata/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.localedata/cldr.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.localedata/thaidict.md /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.management /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.management.agent /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.management.agent/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.management.jfr /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.management.jfr/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.management/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.naming.dns /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.naming.dns/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.naming.rmi /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.naming.rmi/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.net /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.net/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.nio.mapmode /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.nio.mapmode/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.random /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.random/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.sctp /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.sctp/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.security.auth /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.security.auth/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.security.jgss /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.security.jgss/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.unsupported /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.unsupported.desktop /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.unsupported.desktop/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.unsupported/ASSEMBLY_EXCEPTION 
/usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.xml.dom /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.xml.dom/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.zipfs /usr/lib/jvm/java-17-openjdk-amd64/legal/jdk.zipfs/ASSEMBLY_EXCEPTION /usr/lib/jvm/java-17-openjdk-amd64/lib /usr/lib/jvm/java-17-openjdk-amd64/lib/classlist /usr/lib/jvm/java-17-openjdk-amd64/lib/ct.sym /usr/lib/jvm/java-17-openjdk-amd64/lib/jar.binfmt /usr/lib/jvm/java-17-openjdk-amd64/lib/jexec /usr/lib/jvm/java-17-openjdk-amd64/lib/jfr /usr/lib/jvm/java-17-openjdk-amd64/lib/jfr/default.jfc /usr/lib/jvm/java-17-openjdk-amd64/lib/jfr/profile.jfc /usr/lib/jvm/java-17-openjdk-amd64/lib/jrt-fs.jar /usr/lib/jvm/java-17-openjdk-amd64/lib/jspawnhelper /usr/lib/jvm/java-17-openjdk-amd64/lib/jvm.cfg /usr/lib/jvm/java-17-openjdk-amd64/lib/jvm.cfg-default /usr/lib/jvm/java-17-openjdk-amd64/lib/libattach.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libawt.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libawt_headless.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libdt_socket.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libextnet.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libfontmanager.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libinstrument.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libj2gss.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libj2pcsc.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libj2pkcs11.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjaas.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjava.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjavajpeg.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjdwp.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjimage.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjli.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjsig.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjsound.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libjsvml.so /usr/lib/jvm/java-17-openjdk-amd64/lib/liblcms.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libmanagement.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libmanagement_agent.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libmanagement_ext.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libmlib_image.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libnet.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libnio.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libprefs.so /usr/lib/jvm/java-17-openjdk-amd64/lib/librmi.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libsaproc.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libsctp.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libsyslookup.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libverify.so /usr/lib/jvm/java-17-openjdk-amd64/lib/libzip.so /usr/lib/jvm/java-17-openjdk-amd64/lib/modules /usr/lib/jvm/java-17-openjdk-amd64/lib/psfont.properties.ja /usr/lib/jvm/java-17-openjdk-amd64/lib/psfontj2d.properties /usr/lib/jvm/java-17-openjdk-amd64/lib/security /usr/lib/jvm/java-17-openjdk-amd64/lib/security/blocked.certs /usr/lib/jvm/java-17-openjdk-amd64/lib/security/cacerts /usr/lib/jvm/java-17-openjdk-amd64/lib/security/default.policy /usr/lib/jvm/java-17-openjdk-amd64/lib/security/public_suffix_list.dat /usr/lib/jvm/java-17-openjdk-amd64/lib/server /usr/lib/jvm/java-17-openjdk-amd64/lib/server/classes.jsa /usr/lib/jvm/java-17-openjdk-amd64/lib/server/classes_nocoops.jsa /usr/lib/jvm/java-17-openjdk-amd64/lib/server/libjsig.so /usr/lib/jvm/java-17-openjdk-amd64/lib/server/libjvm.so /usr/lib/jvm/java-17-openjdk-amd64/lib/tzdb.dat /usr/lib/jvm/java-17-openjdk-amd64/man /usr/lib/jvm/java-17-openjdk-amd64/man/man1 /usr/lib/jvm/java-17-openjdk-amd64/man/man1/java.1.gz 
/usr/lib/jvm/java-17-openjdk-amd64/man/man1/jpackage.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/keytool.1.gz /usr/lib/jvm/java-17-openjdk-amd64/man/man1/rmiregistry.1.gz /usr/lib/jvm/java-17-openjdk-amd64/release /usr/share /usr/share/binfmts /usr/share/doc /usr/share/doc/openjdk-17-jre-headless /usr/share/doc/openjdk-17-jre-headless/JAVA_HOME /usr/share/doc/openjdk-17-jre-headless/README.Debian /usr/share/doc/openjdk-17-jre-headless/README.alternatives /usr/share/doc/openjdk-17-jre-headless/changelog.Debian.gz /usr/share/doc/openjdk-17-jre-headless/copyright /usr/share/gdb /usr/share/gdb/auto-load /usr/share/gdb/auto-load/usr /usr/share/gdb/auto-load/usr/lib /usr/share/gdb/auto-load/usr/lib/jvm /usr/share/gdb/auto-load/usr/lib/jvm/java-17-openjdk-amd64 /usr/share/gdb/auto-load/usr/lib/jvm/java-17-openjdk-amd64/jre /usr/share/gdb/auto-load/usr/lib/jvm/java-17-openjdk-amd64/jre/lib /usr/share/gdb/auto-load/usr/lib/jvm/java-17-openjdk-amd64/jre/lib/server /usr/share/gdb/auto-load/usr/lib/jvm/java-17-openjdk-amd64/jre/lib/server/libjvm.so-gdb.py /usr/share/lintian /usr/share/lintian/overrides /usr/share/lintian/overrides/openjdk-17-jre-headless

ca-certificates-java

This package contains: /. /etc /etc/ca-certificates /etc/ca-certificates/update.d /etc/ca-certificates/update.d/jks-keystore /etc/default /etc/default/cacerts /etc/ssl /etc/ssl/certs /etc/ssl/certs/java /usr /usr/share /usr/share/ca-certificates-java /usr/share/ca-certificates-java/ca-certificates-java.jar /usr/share/doc /usr/share/doc/ca-certificates-java /usr/share/doc/ca-certificates-java/NEWS.Debian.gz /usr/share/doc/ca-certificates-java/README.Debian /usr/share/doc/ca-certificates-java/changelog.gz /usr/share/doc/ca-certificates-java/copyright

Other

Furthermore Synaptic shows many other Java-related packages.

IntelliJ IDEA approach

See here.

Fact finding

Debian basic installation

Use Synaptic to see what's there as base installation.

What answers to the java command

Fact finding
  • 'which java'
  • returns /usr/bin/java (if there is no JDK entry on the PATH in .bashrc, i.e. the line '# PATH=/home/marc/.jdks/openjdk-18.0.2/bin:$PATH' stays commented out)
  • OR it resolves to the JetBrains JDK, which reports openjdk version "18.0.2" 2022-07-19 (if such a PATH entry is active in .bashrc)

Which version of java is this

You can do:
  • 'java -version' (will match your PATH)
  • openjdk version "18.0.2" 2022-07-19
  • OpenJDK Runtime Environment (build 18.0.2+9-61)
  • OpenJDK 64-Bit Server VM (build 18.0.2+9-61, mixed mode, sharing)

Basics - path and classpath

Info: https://docs.oracle.com/javase/tutorial/essential/environment/paths.html

Path

You should set the path variable if you want to be able to run the executables (javac, java, javadoc, and so on) from any directory without having to type the full path of the command. If you do not set the PATH variable, you need to specify the full path to the executable every time you run it.

To find out if the path is properly set, execute: 'java -version' or 'which java'

This will print the version of the java tool, if it can find it. If the version is old or you get the error java: Command not found, then the path is not properly set.

To set the path permanently, set the path in your startup file. For bash, edit the startup file (~/.bashrc) with your actual path:

PATH=/usr/local/jdk1.7.0/bin:$PATH

export PATH
Then start a new shell or run 'source ~/.bashrc' (a reboot is not required).

Classpath

The CLASSPATH variable is one way to tell applications, including the JDK tools, where to look for user classes. (Classes that are part of the JRE, JDK platform, and extensions should be defined through other means, such as the bootstrap class path or the extensions directory.)

The preferred way to specify the class path is by using the -cp command line switch. This allows the CLASSPATH to be set individually for each application without affecting other applications. Setting the CLASSPATH can be tricky and should be performed with care.

The default value of the class path is ".", meaning that only the current directory is searched. Specifying either the CLASSPATH variable or the -cp command line switch overrides this value.

On Solaris or Linux, execute: >echo $CLASSPATH

If CLASSPATH is not set you will get a 'CLASSPATH: Undefined variable' error in csh (Solaris or Linux); in bash the echo simply prints an empty line.

To modify the CLASSPATH, use the same procedure you used for the PATH variable.

Class path wildcards allow you to include an entire directory of .jar files in the class path without explicitly naming them individually. For more information, including an explanation of class path wildcards, and a detailed description on how to clean up the CLASSPATH environment variable, see the Setting the Class Path technical note.

LEGACY: You can set the CLASSPATH variable using: export CLASSPATH=/path/to/your/classes, or you can use -cp option to run your class: java -cp /path/to/your/classes foo.bar.MainClass.
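To see which class path the JVM actually ended up with (from the CLASSPATH variable, the -cp switch, or the default "."), you can print the java.class.path system property. A minimal sketch; the class name ClasspathProbe is just an illustration:

// ClasspathProbe.java - prints the class path the JVM resolved at start-up
public class ClasspathProbe {
    public static void main(String[] args) {
        System.out.println(System.getProperty("java.class.path"));
    }
}

Compile with 'javac ClasspathProbe.java' and run e.g. 'java -cp /path/to/your/classes:. ClasspathProbe' to confirm what the -cp switch overrides.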

Execution of a jar

See https://stackoverflow.com/questions/1238145/how-to-run-a-jar-file

Essentially: 'java -jar test.jar'
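'java -jar' only works if the jar's manifest names an entry point via the Main-Class attribute; otherwise the JVM reports 'no main manifest attribute'. A minimal sketch with hypothetical names:

// Main.java - the class named by the Main-Class manifest attribute
public class Main {
    public static void main(String[] args) {
        System.out.println("started from test.jar");
    }
}

After 'javac Main.java', such a jar can be built with 'jar cfe test.jar Main Main.class' (the 'e' flag writes the Main-Class entry into the manifest), after which 'java -jar test.jar' runs it.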

IntelliJ on GrayTiger Debian 11 Bullseye

Comes in two versions:
  • IntelliJ IDEA Ultimate: commercial edition for JVM, web, and enterprise development. Includes the features of the Community edition, plus support for languages that other IntelliJ platform-based IDEs focus on, as well as support for a variety of server-side and front-end frameworks, application servers, integration with database and profiling tools, and more.
  • IntelliJ IDEA Community Edition.

Install/remove on GrayTiger

Installation
Downloaded from https://www.jetbrains.com/idea/download/. Comes with OpenJDK 18, which gets installed in /home/marc/.jdks/openjdk-18.0.2/bin. Includes javac, keytool, etc.
  1. Unpacked the IntelliJ IDEA distribution archive into /Downloads/.... This location is the {installation home}.
  2. To start the application, open a console, cd into "{installation home}/bin" and type: ./idea.sh. This will initialize various configuration files in the configuration directory: ~/.config/JetBrains/IdeaIC2022.2
  3. [OPTIONAL] Add "{installation home}/bin" to your PATH environment variable so that you can start IntelliJ IDEA from any directory.
  4. [OPTIONAL] To adjust the value of the JVM heap size, create a file idea.vmoptions (or idea64.vmoptions if using a 64-bit JDK) in the configuration directory and set the -Xms and -Xmx parameters. To see how to do this, you can reference the vmoptions file under "{installation home}/bin" as a model, but do not modify it; add your options to the new file.
  5. [OPTIONAL] Change the location of the "config" and "system" directories - By default, IntelliJ IDEA stores all your settings in the ~/.config/JetBrains/IdeaIC2022.2 directory and uses ~/.local/share/JetBrains/IdeaIC2022.2 as a data cache. To change the location of these directories:
    • Open a console and cd into ~/.config/JetBrains/IdeaIC2022.2
    • Create a file idea.properties and set the idea.system.path and idea.config.path variables, for example:
      • idea.system.path=~/custom/system
      • idea.config.path=~/custom/config
Removal
As per https://www.jetbrains.com/help/idea/uninstall.html#1e05eb06
  • Delete the installation directory (GrayTiger: ~/Downloads/idea-IC-222.3345.118/)
  • Remove the following directories:
    • ~/.config/JetBrains/
    • ~/.cache/JetBrains/
    • ~/.local/share/JetBrains/
What JetBrains forgets to mention:
  • The OpenJDK 18 that came along with the install must be removed manually (if you wish) from /home/marc/.jdks/openjdk-18.0.2/.
  • There exists /home/marc/.java which contains e.g. fonts for JDK 17 and 18

Start-up

Start as marc@GrayTiger:~/Downloads/IDEA/idea-IC-222.3345.118/bin$ ./idea.sh

Getting started

Help at JetBrains resources. In particular, see 'Your first Java application'.

Configuration

JDK
Open 'File/Project structure' and select the JDK under 'Project settings' or 'Platform settings'.
Libraries
See https://www.jetbrains.com/help/idea/library.html.

Can be defined at three levels: global (available for many projects), project (available for all modules within a project), and module (available for one module).
Projects
Create via the wizard. Respect the Java naming conventions, e.g. a package and class name such as com.example.helloworld.HelloWorld. Projects go in /IdeaProjects and include:
  • GTTI - building on TI - incomplete
  • GLEIF-TI
  • EU DSS - programming to understand e.g. validation
  • Springkasteel - BouncyCastle

IntelliJ creates the .iml file and the .idea directory that keeps project settings.
  • /home/marc/IdeaProjects/GTTI-prj/GTTI-prj.iml - XML config data
  • /home/marc/IdeaProjects/GTTI-prj/.idea - here you find a pointer to the xalan libraries

  • /home/marc/Downloads/bc/bctest/bctest.iml - XML config data
  • /home/marc/Downloads/bc/bctest/.idea
You can open a project by opening its .iml file.

Libraries can be added as described here, at the three levels listed above (global, project, module).
GLEIF-TI project
TL2RDF_BE_EvSPs_v401.java TL2RDF_BE_EvSPs_v401.xsl
Run/debug configuration
As per https://www.jetbrains.com/help/idea/run-debug-configuration.html, a run/debug configuration is either temporary or permanent. Access:
  • In the program source, right-click on main - 'Modify run configuration'
    • Here you can add e.g. CLI parameters (see the sketch after this list)
  • Open 'Run' and select 'Edit configurations'.
    • Here you can edit existing run/debug configurations
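The CLI parameters set in a run/debug configuration arrive as the String[] passed to main. A minimal sketch to check what IntelliJ actually passes; the class name ArgsEcho is just an illustration:

// ArgsEcho.java - prints the program arguments configured in the run/debug configuration
public class ArgsEcho {
    public static void main(String[] args) {
        for (int i = 0; i < args.length; i++) {
            System.out.println("arg[" + i + "] = " + args[i]);
        }
    }
}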

Bouncy Castle test programs

BC Example Code: look at the test programs in the packages:
  • org.bouncycastle.crypto.test
  • org.bouncycastle.jce.provider.test
  • org.bouncycastle.cms.test
  • org.bouncycastle.mail.smime.test
  • org.bouncycastle.openpgp.test
  • org.bouncycastle.tsp.test
There are also some specific example programs for dealing with Attribute Certificates, PKCS12, SMIME and OpenPGP:
  • org.bouncycastle.mail.smime.examples
  • org.bouncycastle.openpgp.examples
On GrayTiger: /home/marc/Downloads/bc/bctest/org/bouncycastle/crypto/test contains classfiles - Nautilus is no great help, opening them in IntelliJ starts a decompiler and shows you the sources. This is a plug-in, https://www.jetbrains.com/help/idea/decompiler.html - installed in /home/marc/Downloads/idea-IC-222.3345.118/plugins However, it stopped working ...
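Not one of the BC test programs above, but a minimal smoke test (hypothetical class name BCSmokeTest) that verifies the provider jar is on the class path by registering the provider and computing a digest through it:

import java.security.MessageDigest;
import java.security.Security;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

// BCSmokeTest.java - registers the Bouncy Castle provider and uses it for SHA-256
public class BCSmokeTest {
    public static void main(String[] args) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        MessageDigest md = MessageDigest.getInstance("SHA-256", "BC");
        byte[] digest = md.digest("hello".getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        System.out.println(hex);
    }
}

Compile and run with the bcprov jar on the class path (-cp).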

Legacy - Eclipse on GrayTiger Debian 11 Bullseye

Downloaded in /home/marc/Downloads and installer executed. Uninstalled by deleting the /home/marc/eclipse and ./eclipse folders and the eclipse-workspace.

Legacy on Debian 8 Jessie

Type 'java' or 'java -showversion' at the command prompt. Or query Synaptic, which tells you a default-jre is installed, which is openjdk-7-jre for amd64. There is also a plug-in (icedtea-7-plugin) which is for browsers.

Debian Java info can be found at https://wiki.debian.org/Java/. There are also openjdk-8 through openjdk-11 available.

To run a recent Eclipse IDE, you need Java 8. How: https://linuxconfig.org/install-the-latest-eclipse-java-ide-on-debian-8-and-ubuntu-16-04-linux

Java 8 was not made available to Debian users when Jessie was first launched, but it is available through the jessie-backports repository. Follow the instructions at linuxconfig.org

Legacy

JDK1.1.3 on toothbrush (SuSE 5.3)

Be aware that there are many alternatives to run Java on Linux. This includes the jdk port from blackdown.org, guavac, kaffe, tya etc. I went for jdk113, included with SuSE 5.3. This brings along:
  • the java runtime (i.e. classes.zip)
  • java sources for public classes (i.e. src.zip)
  • tools such as javac, java etc.
  • documentation & demos
Key troubleshooting to get jdk113 running:
  1. You need the right PATH statement. ">which java" results in ">/usr/lib/java/bin/java". Mind you, "java" is just a wrapper script, locating and starting the right binaries. Apparently, PATH gets set in /etc/profile.
  2. You need the right CLASSPATH statement. What is your current CLASSPATH's value? ">echo $CLASSPATH". If nothing comes back, the variable is not set.
    • Fixing the classpath for the jdk itself I ran ">java -v(erbose) -classpath /Java/nsm1.class", resulting in "unable to find java/lang/threads". (((with hindsight: note you will now overwrite the standard classpath - so the regular classes are gone))) This is a fundamental problem, you don't find the class to create the first thread. Apparently the wrapper does not look into the right location /usr/lib/jdk113 where YaST put the binaries. So the solution is to ">export CLASSPATH=/usr/lib/jdk1.1.3:/Java". You set first the jdk binaries, and then where java can find your own classfiles.
    • Fixing "Can't find class nsm1.class" This means you made a naming mistake (or you have a classpath problem). The same name should be used 3 times:
      • for your sourcefile (foo.java),
      • for your main (inside that sourcefile) and
      • for the classfile (foo.class).
Compile with
  • >"export CLASSPATH=/usr/lib/java-----:/YourOwnAddition...."
  • >"cd /Java" (change to the dir where your .jave file resides)
  • >"javac nsm1.java" (note that you have to specify the .java extension)

Check that you indeed have a brand-new compilation, e.g. with "ls -l". Run with ">java nsm1" (note the lack of the .class extension).
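A minimal sketch of such a source file, showing the naming rule for the nsm1 example (the body is illustrative):

// nsm1.java - the public class name, the source file name and the resulting class file name all match
public class nsm1 {
    public static void main(String[] args) {
        System.out.println("nsm1 is running");
    }
}

Compiling with ">javac nsm1.java" then produces nsm1.class, which ">java nsm1" runs.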

JDK1.1.3 - a word about Applets

Running the appletviewer: Set the classpath (refer to above). Then ">appletviewer HelloWorldApplet.html"
  Adding an applet to your webpage: Which applets does Sun provide to play with? Demos go (discovered through YaST) in /usr/doc/packages/javadoc/demo. Just open the html files there.
 

JDK1.1.3 - a word about Security

Security settings are defined in /usr/lib/jdk1.1.3/lib/security/java.security
 
 

JDK1.1.7 on malekh (SuSE 6.1)

Documentation in:
  • /usr/doc/packages/java: SuSE 6.1 comes with Blackdown's jdk117, includes Metro Link's Motif - contains a README.linux - and an interesting index (pointing to the standard jdk117 README)
  • /usr/doc/packages/javadoc: all the Sun documentation and demos
  • /usr/doc/packages/javarunt: info about the run time environment
Installation done as part of the overall YaST installation, and:
  • "which java" results in "/usr/lib/java/bin/java"
  • "java -version" results in "version 1.1.7".
On CLASSPATH: Java(c) on Linux runs via a 'wrapper' script, located in e.g. "/usr/lib/java/bin/javac ---> .java_wrapper". The wrapper checks (if [ -z "$CLASSPATH" ... ]) whether CLASSPATH has been set already, and always appends its own entries to whatever was already set. So if you want to add your own classfiles for IMPORT statements: set CLASSPATH and export it. Problem-1: I set my classpath, but it seems to go unnoticed by javac. Solution-1: careful: if /JavaSamples/CoreJavaVol1+2/corejava is a directory containing useful classes such as CloseableFrame, then set the classpath just above it:
  1. "CLASSPATH=/JavaSamples/CoreJavaVol1+2" (setting the classpath too deep results in not finding your imports...)
  2. alternatively, you can also append more: "CLASSPATH=/JavaNSMsec:/JavaSamples/CoreJavaVol1+2"
  3. "export CLASSPATH"
  4. "env" shows you the value of your environment variables, include CLASSPATH
  5. "sh -x javac myprogram.java" will show the wrapper's substitution of CLASSPATH
Problem-2: I set my classpath, but classes in my current working directory are no longer accessible now. Solution-2: explicitly include ".:" when setting the classpath:
  1. "CLASSPATH=/JavaSamples/CoreJavaVol1+2:/JavaNSMsec" (no leading ".")
  2. "export CLASSPATH" - you can use "env" to check...
  3. "javac myprogram.java" or "sh -x javac myprogram.java"
  4. now again explicitly set CLASSPATH, with a leading ".": "CLASSPATH=.:/JavaSamples/CoreJavaVol1+2:/JavaNSMsec"
  5. "java myprogram" or "sh -x java myprogram"
You can also modify "/usr/lib/java/bin/.java_wrapper" to obtain some more feedback. For NSM:
  1. "cd /JavaNSM"
  2. "javac master09.java" or
  3. "sh -x [/usr/lib/java/bin/]javac nsm9.java" to see substitutions in the wrapper
  4. "ls -l" will show the timestamp of the .class file
  5. "java master09" (or "sh -x java master09")
CRYPTIX ---> ref to the crypto software (including how to compile a package). APPLETS: for O'Reilly's "Java in a nutshell": chapter 6. The FirstApplet.java resides in "/JavaSamples/SampeNutshell/ch06/FirstApplet.java". I created the necessary html as: "" This runs smoothly, and you can check out the Java console of Navigator to see what happens. Here you see that Navigator 4.51 runs Java 1.1.5 (only).

JDK1.1.7v3 on avina

Basic documentation in '/usr/doc/packages/java'. Blackdown1.1.7v3. Oddly enough, there are both:
  • /usr/lib/jdk1.1.7 (from package 'java')
  • /usr/lib/jdk1.1.8 (from packages 'ibmjdk' and 'ibmjre')
Which is in use? Running 'env' shows I have /usr/lib/java/bin' in my PATH. Running 'java -version' shows I use '1.1.8'. YaST shows that 1.1.8 comes from package ibmjdk & ibmjre. More info on www.ibm.com/java/jdk/118/linux. As you can see in /usr/lib/jdk1.1.8, there are goodies added such as javap (disassembly). Further down the tree you'll find property files and the java.security file. Checkout: java support in the Linux kernel: "/usr/src/linux.../documentation/java.txt"

Java2 on tux (SuSE 7.0)

Installation of various Java components done as part of the overall YaST2 installation. xrpm tells me we now have:
  • standard:
    • JAVA2: java2-1.2.2-7 (Java2 SDK - Standard Edition v1.2.2) - /usr/lib/jdk1.2.2/bin/javac and /jre/bin/... (keytool, ...)
    • /usr/share/doc/packages/java2
    • which makes me conclude we don't have any 'standard extensions' such as javax.swing ... are they available for Linux?
    • JAVA1: java-1.1.8v1-2 (older JDK1.1.8)
    • JAVA1: javadoc-1.1.3-43
    • JAVA1: javarunt-1.1.7v3-19
  • from IBM:
    • jikes-1.06-119 - IBM Jikes compiler, http://ibm.com/developerworks/opensource
    • ibmjava2-1.3-8 - /usr/share/doc/packages/ibmjava2 ("Sun's Java 1.3 - J2SE") => even higher than Sun's 1.2???
    • ibmjre2-1.3-8 - /usr/share/doc/packages/ibmjre2
    • ibmjaas-1.3-8 - /usr/share/doc/packages/ibmjaas
    • ibmjcom-1.3-8 - /usr/share/doc/packages/ibmjcom
  • other goodies:
    • jakarta-3.1-31
    • jserv-1.1.2-25
    • jtools-...
What do we have:
  • "which java" results in "/usr/lib/java/bin/java" (same as with previous versions, actually a wrapper to .java_wrapper which is a shell script provided by Sun)
  • "java -version" results in "version 1.2.2","Classic VM (build 1.2.2-L, green threads, nojit)"
Which version is this? Sun/Blackdown? IBM? Most likely Sun/Blackdown. Some investigation: Java2 demo's:
  1. Java2D demo's:
    • "cd /usr/share/doc/packages/java2/demo/jfc/Java2D" (exact path may vary)
    • "java -jar Java2Demo.jar" ===> nice demo

    • "cd /usr/share/doc/packages/java2/demo/applets/MoleculeViewer"
    • "appletviewer example3.html"
  2. SwingSet demo's:
    • "cd /usr/share/doc/packages/java2/demo/jfc/SwingSet"
    • "java -jar SwingSet.jar" ===> nice demo
  3. "appletviewer /usr/share/doc/packages/IBMJava2-SDK/jfc/demo/SwingSet2/SwingSet2.html"
  4. Others: Metalworks, SwingApplet, ...
* Remark * Netscape 4.74 supplied with SuSE 7.0 still only runs jdk115. However, a plug-in allows running Java2 programs. The plug-in is provided by Sun for Win32; the Linux version is under development.

Java2 on imagine2 (SuSE 7.2)

FIRST TRY: installation of the basic JDK and JRE done as part of the overall YaST2 installation. Running "java -version" tells me I have "java 1.3.0". xrpm tells me "java2 1.3-46" resides in "/usr/lib/jdk1.3". Doc and demos in "/usr/share/doc/packages/java2".
SECOND TRY: the YaST2 installation of Java does not result in a working "javac" or "which java". So I did: "PATH=$PATH:/usr/lib/jdk1.3/bin" and "export PATH". Then OK.

Prerequisites for J2EE: J2SE 1.3.1 (not included in SuSE 7.2)

Swing

See JTK1.html .

Java LDAP SDK

From the "LDAP programming in Java" book. The actual SDK classes reside in /packages/ldapjdk.jar and /packages/ldapfilt.jar. These must be included in the CLASSPATH. Useful programs include /src/netscape/ldap/tools/LDAPsearch.java etc. Usage e.g. "java LDAPSearch -h memberdir.netscape.com -b" "ou=member_directory, o=netcenter.com" "cn=tony d*"

JBuilder V3.5

Note that having Java2 installed is a prerequisite.

The JIT

Move javacomp... file from CD to /. Run "tar xvfz ....", which results in /javacomp-1.2.15. Now you have to copy libjavacomp.so to the jre directory. Use xrpm to find this jre directory: probably /usr/lib/jdk1.2.2/jre/lib/i386. From now on, you can use the JIT by specifying flags on javac / java: Quote from README.TXT: To use the JBuilder JIT for Linux you can either set the environment variable JAVA_COMPILER to javacomp (e.g export JAVA_COMPILER=javacomp if you are running bash) or you can set the JDK system property when you invoke the java runtime: java -Djava.compiler=javacomp HelloWorld to run HelloWorld using the JBuilder JIT for Linux or javac -J-Djava.compiler=javacomp HelloWorld.java to use the JBuilder JIT for Linux with javac Unquote.

JBuilder - install

Follow instructions. Into /usr/local/jbuilder35. Also installed JDatastore, a DBMS, the JBuilder documentation, and the samples. Installed the OpenTools documentation into /usr/local/jbuilder35/opentoolsdoc as well. Running: unclear how to start from the CLI, but an entry was added in KDE's personal settings. The first start-up required entering the license key. Running JBuilder and JDataStore goes fine.

JBuilder - components

There is:
  1. JBuilder
  2. JDatastore
  3. Documentation
  4. Samples
  5. OpenTools

Where does it live:
  • /usr/local/jbuilder35: the basics
  • root/.jbuilder: properties, license info, ...
  • root/.jdatastore: datastore properties
  • root/jbproject/ the individual projects (ref infra)
Further details can be found in jBuilderToolKit.html

mySQL

IV.109.1 What have we got..

The manual is found here. The following directories are used:
  • /usr/share/doc/packages/mysql - documentation
  • /var/mysql - databases and logfiles
  • /usr/bin - programs such as
    • mysql - the default client program to connect to the mysqld
    • mysql_install_db - installation script
    • mysql_findrows, zap, dump, import, dbug, show, acces, ...
  • /usr/sbin/mysqld - server program
  • /usr/share/mysql - misc additional files e.g. language extensions
  • /usr/include/mysql - include files
  • /usr/lib/mysql - static libs
  • /usr/lib/mysqlclient.so* - runtime libs
  • /usr/share/sql-bench - benchmarks - testsuite
  • kmysqlad (KDE-admin)
  • kmysql

IV.109.2 Completing the installation

Execute "mysql_install_db", which results in creation of 6 tables: db, host, user, func, tables_priv, columns_priv in /var/mysql. Provided a password (vwp91) via "mysqladmin -u root -h localhost -password vwp91 -p". Apparently this failed since the server was not yet running. Start the server via "safe_mysqld &". You can now e.g.
  • mysqladmin status
  • mysqladmin extended-status
  • but even better: via kmysqladmin and kmysql

IV.109.3 Creating databases and tables

Creating a database means creating a directory under the "MySQL data directory" to hold the tables. Various ways exist:
  • "mysql -h localhost -u root" to connect, then enter "CREATE DATABASE db_name" commands
  • you can also use mysqladmin to create databases
  • it should be possible via JDBC as well
  • "show databases;"
  • "select database();" tells you what database is currently selected
  • "GRANT ALL ON menagerie.* TO your_sql_name;"
  • "CREATE DATBASE menagerie"
  • "USE menagerie"
  • "SHOW TABLES"
  • "CREATE TABLE pet (name varchar(20), ...);"
  • "DESCRIBE pet;" shows you the structure
  • "INSERT INTO pet VALUES (...);" to manually insert a record at a time
  • "LOAD DATA ...;" to load from an ASCII text file
  • "SELECT what FROM table WHERE conditions"

IV.109.4 Batch mode (scripts)

Via "mysql -h localhost -u root < script". You can also "... | more" or "... > output.txt".

IV.109.5 Via JDBC

MM.MySQL driver (apparently version 1.2c) downloaded via www.mysql.com (an alternative seems to be via GNU). Downloaded and unpacked in /Java55mysqldriver. Sample programs downloaded in /Java90Samples/JDBC2. Results in "no suitable driver".

MM.MySQL 2.04 states: requirements: any JVM supporting JDBC-1.2 or JDBC-2.0. What am I using??? and also: MySQL protocol 9 or 10. What am I using???
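The usual cause of "no suitable driver" is that the driver class never got registered, or that its jar is missing from the CLASSPATH. A hedged sketch using the MM.MySQL driver class org.gjt.mm.mysql.Driver and the menagerie/pet example from the mySQL section above (password as set earlier):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// JdbcProbe.java - registers the MM.MySQL driver and reads the pet table
public class JdbcProbe {
    public static void main(String[] args) throws Exception {
        // the unpacked driver (e.g. from /Java55mysqldriver) must be on the CLASSPATH
        Class.forName("org.gjt.mm.mysql.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/menagerie", "root", "vwp91");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT name FROM pet");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        st.close();
        con.close();
    }
}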

Ant

Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.

Installing Ant

Downloaded from jakarta.apache.org. Untar installs Ant in e.g. "/jakarta-ant-1.4.1". Ant requires a JAXP-compliant XML parser. The binary version of Ant includes the Apache Crimson parser. Ant (binary version) consists of /bin, /lib and /docs.

Preparing to run Ant

To run Ant, you need to:
  • execute "export ANT_HOME=/jakarta-ant-1.4.1" (or whatever directory ant resides in)
  • execute "which java" to determine java's home (e.g. /usr/lib/java/bin/java - this should be linked to e.g. /jdk1.3.1_01)
  • execute "export JAVA_HOME=/usr/lib/java" (note that you cut-off the last /bin/java - appended automatically)
  • execute "export PATH=${PATH}:${ANT_HOME}/bin" (this appends the ant bin directory to your path)
  • I DID PUT THIS IN /root/.bash_profile
  • Remember you can use "env" to display your environment variables.

Ant basics

Each build.xml file contains one project. Each project has three attributes:
  • name: the project name
  • default: the default target to make when no target is given
  • basedir: the base-dir from where all path calculations are done
Each project has one or more targets, for which tasks are executed.

Running Ant

Just "ant". By default Ant will look for a "build.xml" file. If not found at the level of the working directory, Ant will search in higher directories. You can also specify "-find". And "-verbose", which is very helpful.

Poseidon UML

Installing Poseidon

Download from www.gentleware.com. Install in /poseidon1.3 (no good under Kassandra's subdirs). Tinker a bit with /poseidon1.3/bin/startPoseidon.sh. Hardcode the classpath, make sure the right ".:/" is there (. for current, : to concat, and / to start the classpath dirs with). I used the following classpath def:
CLASSPATH=.:/poseidon1.3/lib/poseidon.jar
CLASSPATH=$CLASSPATH:/poseidon1.3/lib/docs.jar
CLASSPATH=$CLASSPATH:$HOME/temp

Maven (version 3)

Installation on GrayTiger

Initial status

Running 'maven' on GrayTiger returns 'not found'. Synaptic:
  • libapache-pom-java - 'Maven metadata for all Apache projects'
  • libcommons-parent-data - 'Maven metadata for Apache Commons project'
Conclusion: maven itself does not seem to be installed by default.

Nevertheless there are libapache-pom-java files installed:
  • /usr/share/doc/libapache-pom-java
  • /usr/share/doc/libapache-pom-java/changelog.Debian.gz
  • /usr/share/doc/libapache-pom-java/copyright
  • /usr/share/maven-repo
  • /usr/share/maven-repo/org
  • /usr/share/maven-repo/org/apache
  • /usr/share/maven-repo/org/apache/apache
  • /usr/share/maven-repo/org/apache/apache.
  • /usr/share/maven-repo/org/apache/apache./18
  • /usr/share/maven-repo/org/apache/apache./18/apache.-18-site.xml
  • /usr/share/maven-repo/org/apache/apache./18/apache.-18.pom
  • /usr/share/maven-repo/org/apache/apache./debian
  • /usr/share/maven-repo/org/apache/apache./debian/apache.-debian-site.xml
  • /usr/share/maven-repo/org/apache/apache./debian/apache.-debian.pom
  • /usr/share/maven-repo/org/apache/apache/18
  • /usr/share/maven-repo/org/apache/apache/18/apache-18.pom
  • /usr/share/maven-repo/org/apache/apache/debian
  • /usr/share/maven-repo/org/apache/apache/debian/apache-debian.pom

Installation

Download from https://maven.apache.org/ into /Downloads/mvn.

Installation: https://maven.apache.org/install.html
  • Use your preferred archive extraction tool => extracted into /home/marc/Downloads/mvn/
  • Add the bin directory of the created directory /home/marc/Downloads/mvn/apache-maven-3.8.6/bin to the PATH environment variable
    • Set the path in the startup file (~/.bashrc) as 'PATH=/home/marc/Downloads/mvn/apache-maven-3.8.6/bin:$PATH'
  • Confirm with mvn -v in a new shell.
Returns:
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /home/marc/Downloads/mvn/apache-maven-3.8.6
Java version: 17.0.4, vendor: Debian, runtime: /usr/lib/jvm/java-17-openjdk-amd64
Default locale: en_GB, platform encoding: UTF-8
OS name: "linux", version: "5.18.0-0.deb11.3-amd64", arch: "amd64", family: "unix"

Observation: maven uses the default platform java-17, while IntelliJ/IDEA uses its own java-18 in /.jdks. This seems error-prone.

As per https://www.jetbrains.com/help/idea/maven-support.html, IDEA already includes Maven. Conclusion: uninstall the downloaded maven, use the one within IDEA.
  • Deleted /home/marc/Downloads/mvn/
  • Removed the path in the startup file (~/.bashrc)
Observation: the EU DSS installation, which should be done with the command 'mvn clean install', now fails because the standalone maven was removed.

So
  • how to use maven under IDEA?
  • or alternatively: use the IDEA maven via CLI, by adding its path to PATH ===> DONE

Maven's functionality

General introduction
See https://maven.apache.org/guides/index.html
Configuration
There is a settings.xml file in /home/marc/Downloads/IDEA/idea-IC-222.3345.118/plugins/maven/lib/maven3/conf. It contains e.g. specifications for local repository, servers, keys, ...

There's also /home/marc/.m2 with a local repository where you find some EU DSS material such as jar files, pom files, pointers to remote repos... Regarding dependencies: when you download a dependency from a remote Maven repository, Maven stores a copy of the dependency in your local repository.
Plugins
Maven is a plugin execution framework; all work is done by plugins, described at https://maven.apache.org/plugins/index.html.

A plugin goal is called a Mojo (maven plain old java object).
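A hedged sketch of what such a Mojo looks like in Java, assuming the maven-plugin-api and maven-plugin-annotations dependencies are declared; the goal name 'greet' and the class GreetMojo are purely illustrative:

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.Mojo;

// GreetMojo.java - a trivial plugin goal, invoked as mvn <plugin-prefix>:greet
@Mojo(name = "greet")
public class GreetMojo extends AbstractMojo {
    public void execute() throws MojoExecutionException {
        getLog().info("Hello from a Mojo");
    }
}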
Help plugin
Usage: mvn help:describe -Dplugin=help where help:describe is the goal, and -D defines a property (that we want the description of the help plugin). Alternatively mvn help:describe -Dplugin=help -Dfull. Also: for more information, run 'mvn help:describe [...] -Ddetail'.

The plugin help has 8 goals (mojos):
  1. help:active-profiles - Displays a list of the profiles which are currently active for this build.
  2. help:all-profiles - Description: Displays a list of available profiles under the current project. Note: it will list all profiles for a project. If a profile comes up with a status inactive then there might be a need to set profile activation switches/property.
  3. help:describe - Description: Displays a list of the attributes for a Maven Plugin and/or goals (aka Mojo - Maven plain Old Java Object).
  4. help:effective-pom - Description: Displays the effective POM as an XML for this build, with the active profiles factored in, or a specified artifact. If verbose, a comment is added to each XML element describing the origin of the line.
  5. help:effective-settings - Description: Displays the calculated settings as XML for this project, given any profile enhancement and the inheritance of the global settings into the user-level settings.
  6. help:evaluate - Description: Evaluates Maven expressions given by the user in an interactive mode.
  7. help:help - Description: Display help information on maven-help-plugin. Call mvn help:help -Ddetail=true -Dgoal= to display parameter details.
  8. help:system - Description: Displays a list of the platform details like system properties and environment variables.
Other plugins
Further there are the build and the reporting plugins:
  • Build plugins will be executed during the build and they should be configured in the <build/> element from the POM.
  • Reporting plugins will be executed during the site generation and they should be configured in the <reporting/> element from the POM. Because the result of a Reporting plugin is part of the generated site, Reporting plugins should be both internationalized and localized.
Nutshell
There are three lifecycles (clean, default and site). They consist of phases, which have bindings to plugin goals.

To execute: mvn [options] [<goal(s)>] [<phase(s)>]. Somewhat counter-intuitive: first goal(s), then phase(s)...

Tools such as Nexus allow to search the contents of a repository based on the pom.
Project creation through Archetypes
An archetype is a template of a project which is combined with user input to produce a working Maven project tailored to the user's requirements.

Creation of a simple project mvn archetype:create -DgroupId=org.sonatype.mavenbook.ch03 -DartifactId=simple -DpackageName=org.sonatype.mavenbook where archetype:create is the goal.

To build the project do run mvn install from the directory that contains the pom.xml.

To execute do java -cp target/simple-1.0-SNAPSHOT.jar org.sonatype.mavenbook.App.
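The quickstart/simple archetypes generate a trivial main class; for the command above it typically looks like this (exact content may differ per archetype version):

package org.sonatype.mavenbook;

// src/main/java/org/sonatype/mavenbook/App.java as generated by the archetype
public class App {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}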

Maven provides several Archetype artifacts:
  • maven-archetype-archetype - to generate a sample archetype project.
  • maven-archetype-j2ee-simple - to generate a simplified sample J2EE application.
  • maven-archetype-mojo - to generate a sample Maven plugin (Mojo).
  • maven-archetype-plugin - to generate a sample Maven plugin.
  • maven-archetype-plugin-site - to generate a sample Maven plugin site.
  • maven-archetype-portlet - to generate a sample JSR-268 Portlet.
  • maven-archetype-quickstart - to generate a sample Maven project.
  • maven-archetype-simple - to generate a simple Maven project.
  • maven-archetype-site - to generate a sample Maven site which demonstrates some of the supported document types like APT, XDoc, and FML and demonstrates how to i18n your site.
  • maven-archetype-site-simple - to generate a sample Maven site.
  • maven-archetype-webapp - to generate a sample Maven Webapp project.

Usage

Directory layout
Maven uses a standardised directory layout: https://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html

To see which maven, which version etc: mvn -v

To execute: mvn [options] [<goal(s)>] [<phase(s)>]
options
./mvn -h Options:
  • -am,--also-make If project list is specified, also build projects required by the list
  • -amd,--also-make-dependents If project list is specified, also build projects that depend on projects on the list
  • -B,--batch-mode Run in non-interactive (batch) mode (disables output color)
  • -b,--builder The id of the build strategy to use
  • -C,--strict-checksums Fail the build if checksums don't match
  • -c,--lax-checksums Warn if checksums don't match
  • -cpu,--check-plugin-updates Ineffective, only kept for backward compatibility
  • -D,--define Define a system property
  • -e,--errors Produce execution error messages
  • -emp,--encrypt-master-password Encrypt master security password
  • -ep,--encrypt-password Encrypt server password
  • -f,--file Force the use of an alternate POM file (or directory with pom.xml)
  • -fae,--fail-at-end Only fail the build afterwards; allow all non-impacted builds to continue
  • -ff,--fail-fast Stop at first failure in reactorized builds
  • -fn,--fail-never NEVER fail the build, regardless of project result
  • -gs,--global-settings Alternate path for the global settings file
  • -gt,--global-toolchains Alternate path for the global toolchains file
  • -h,--help Display help information
  • -l,--log-file Log file where all build output will go (disables output color)
  • -llr,--legacy-local-repository Use Maven 2 Legacy Local Repository behaviour, ie no use of _remote.repositories. Can also be activated by using -Dmaven.legacyLocalRepo=true
  • -N,--non-recursive Do not recurse into sub-projects
  • -npr,--no-plugin-registry Ineffective, only kept for backward compatibility
  • -npu,--no-plugin-updates Ineffective, only kept for backward compatibility
  • -nsu,--no-snapshot-updates Suppress SNAPSHOT updates
  • -ntp,--no-transfer-progress Do not display transfer progress when downloading or uploading
  • -o,--offline Work offline
  • -P,--activate-profiles Comma-delimited list of profiles to activate
  • -pl,--projects Comma-delimited list of specified reactor projects to build instead of all projects. A project can be specified by [groupId]:artifactId or by its relative path
  • -q,--quiet Quiet output - only show errors
  • -rf,--resume-from Resume reactor from specified project
  • -s,--settings Alternate path for the user settings file
  • -t,--toolchains Alternate path for the user toolchains file
  • -T,--threads Thread count, for instance 2.0C where C is core multiplied
  • -U,--update-snapshots Forces a check for missing releases and updated snapshots on remote repositories
  • -up,--update-plugins Ineffective, only kept for backward compatibility
  • -v,--version Display version information
  • -V,--show-version Display version information WITHOUT stopping build
  • MLS: -X,--debug produces full execution debug output (listed in 'mvn -h' as well)
Lifecycles, their phases, goals
The Maven lifecycle is defined by the components.xml file in the maven-core module, see https://maven.apache.org/ref/3.8.6/maven-core/lifecycles.html.

Default lifecycle bindings are defined in a separate default-bindings.xml descriptor.

Built-in lifecycles are:
  • clean
  • default
  • site
The lifecycles and their phases are:
  • clean - pre-clean, clean, post-clean (so there's both a lifecycle and a phase called clean)
  • default - validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy
  • site - pre-site, site, post-site, site-deploy
Phases are mapped to underlying goals. The specific goals executed per phase are dependent upon the packaging type of the project. For example, package executes jar:jar if the project type is a JAR, and war:war if the project type is WAR.

Some phases have goals bound to them by default. And for the default lifecycle, these bindings depend on the packaging value. Here are some of the goal-to-build-phase bindings.

Lifecycle: 'Clean', bindings of Phase 'clean' => plugin goal: clean:clean

Lifecycle: 'Default', bindings of Phase 'package' => plugin goals: ejb:ejb or ejb3:ejb3 or jar:jar or par:par or rar:rar or war:war

Execution
Just creating the package and installing it in the local repository for re-use from other projects can be done with mvn verify ('verify' is a phase in the default lifecycle)

A fresh build of a project generating all packaged outputs and the documentation site and deploying it to a repository manager could be done with mvn clean deploy site-deploy ('clean' is a phase in the clean lifecycle, 'deploy' is a phase in the default lifecycle, 'site-deploy' is a phase in the site lifecycle).

The Project Object Model (POM) is the basic unit of work in Maven. It contains
  • project : top-level element in all Maven pom.xml files.
  • modelVersion : what version of the object model this POM is using. The version of the model itself changes very infrequently but it is mandatory in order to ensure stability of use if and when the Maven developers deem it necessary to change the model.
  • groupId : the unique identifier of the organization or group that created the project. The groupId is one of the key identifiers of a project and is typically based on the fully qualified domain name of your organization. For example org.apache.maven.plugins is the designated groupId for all Maven plugins.
  • artifactId : the unique base name of the primary artifact being generated by this project. The primary artifact for a project is typically a JAR file. Secondary artifacts like source bundles also use the artifactId as part of their final name. A typical artifact produced by Maven would have the form <artifactId>-<version>.<extension> (for example, myapp-1.0.jar).
  • version : the version of the artifact generated by the project. Maven goes a long way to help you with version management and you will often see the SNAPSHOT designator in a version, which indicates that a project is in a state of development. We will discuss the use of snapshots and how they work further on in this guide.
  • name : the display name used for the project. This is often used in Maven's generated documentation.
  • url : where the project's site can be found. This is often used in Maven's generated documentation.
  • properties : value placeholders accessible anywhere within a POM.
  • dependencies : this element's children list dependencies. The cornerstone of the POM.
  • build : this element handles things like declaring your project's directory structure and managing plugins.
For a complete reference refer to https://maven.apache.org/ref/3.8.6/maven-model/maven.html

Netbeans

Tried Forte/Sun One Studio - but this only works on Sun Linux or Red Hat. Gave up and switched to www.netbeans.org - download .tar.zip executable. Unpack. Start with '/netbeans/bin/runide.sh -jdkhome /usr/lib/java' .

Node.js, nvm and npm

At a glance:
  • Node.js is an asynchronous event-driven JavaScript runtime (jsre)
  • nvm (Node Version Manager) allows you to download and install Node.js (in multiple versions if you want)
  • npm (Node Package Manager) allows you to install JavaScript packages (npm comes with Node.js, so if you have node installed you most likely have npm installed as well)

Install on DebbyBuster

nvm on DebbyBuster

See e.g. https://tecadmin.net/how-to-install-nvm-on-debian-10/

First install nvm (which allows you to select the version of Node.js you want, even multiples): 'curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash'

Sample use:
  • nvm --version lists the version of nvm that is installed (e.g. 0.38.0)
  • nvm ls List available versions
    • apparently DebbyBuster has version 8 - 14 available, the default is 8
  • nvm install 8.0.0 Install a specific version number (of Node.js - implicit)
  • nvm use 8.0 Use the latest available 8.0.x release
  • nvm use node Use the latest version
  • nvm run 6.10.3 app.js Run app.js using node 6.10.3
  • nvm ls-remote List available versions remotely
  • nvm exec 4.8.3 node app.js Run `node app.js` with the PATH pointing to node 4.8.3
  • nvm alias default 8.1.0 Set default node version on a shell
  • nvm alias default node Always default to the latest available node version on a shell
  • nvm install node Install the latest available version
  • nvm install --lts Install the latest LTS version
  • nvm use --lts Use the latest LTS version

npm on DebbyBuster

Tells me: 'npm does not support Node.js v8.0.0' - upgrade. AWS-CDK needs 10.13.0 or later (but not 13.*). So 'nvm install 10.13.0'. OK.

Install on Kali

Run apt-get install nodejs. Msgs: installs libnode64 and nodejs-doc. Suggests to install npm.

Run apt-get install npm. Msgs: installs 250 packages, whose name is starting with node-. Tries to get them from ftp.belnet.be/pub/kali kali-rolling/main 'node-name'. Sometimes fails. Fixed following the apt feedback.

What and where

In /usr/lib/nodejs you find all modules plus npm. Running 'apt show nodejs' informs you that ca-certificates and nodejs-doc are recommended to be installed. Running 'apt show nodejs-doc' informs you this contains documentation - but WHERE is this stored?

Nodeclipse

See 'https://nodeclipse.github.io'. Rather complicated, drop.

Visual Studio Code - VSCode

Install on GrayTiger

See code.visualstudio.com, where you can find a .deb file, https://code.visualstudio.com/docs/?dv=linux64_deb

Download .deb file into /Downloads/vs_code. Then apt install ./code_1.41_etc.
sudo apt install ./code_1.83.1-1696982868_amd64.deb 
[sudo] password for marc: 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'code' instead of './code_1.83.1-1696982868_amd64.deb'
The following NEW packages will be installed:
  code
0 upgraded, 1 newly installed, 0 to remove and 322 not upgraded.
Need to get 0 B/95.7 MB of archives.
After this operation, 386 MB of additional disk space will be used.
Get:1 /home/marc/Downloads/vs_code/code_1.83.1-1696982868_amd64.deb code amd64 1.83.1-1696982868 [95.7 MB]
Selecting previously unselected package code.
(Reading database ... 628885 files and directories currently installed.)
Preparing to unpack .../code_1.83.1-1696982868_amd64.deb ...
Unpacking code (1.83.1-1696982868) ...
Setting up code (1.83.1-1696982868) ...
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for shared-mime-info (2.0-1) ...
Processing triggers for mailcap (3.69) ...
Processing triggers for desktop-file-utils (0.26-1) ...
Then there's a GNOME desktop item to start it.

Install additional tools on GrayTiger

See https://code.visualstudio.com/docs/setup/additional-components : git, nodejs, typescript (not done yet).

You can extend the VS Code editor through extensions, available on the VS Code Marketplace, https://code.visualstudio.com/docs/editor/extension-marketplace

Additional tools available include:
  • Yeoman - An application scaffolding tool, a command line version of File > New Project.
  • generator-hottowel - A Yeoman generator for quickly creating AngularJS applications.
  • Express - An application framework for Node.js applications using the Pug template engine.
  • Gulp - A streaming task runner system which integrates easily with VS Code tasks.
  • Mocha - A JavaScript test framework that runs on Node.js.
  • Yarn - A dependency manager and alternative to npm.
Most of these tools require Node.js and the npm package manager to install and use.

User interface

See https://code.visualstudio.com/docs/getstarted/userinterface

Settings

See https://code.visualstudio.com/docs/getstarted/settings

Rust in VS Code

See https://code.visualstudio.com/docs/languages/rust.

Setting up and using Rust within Visual Studio Code, with the rust-analyzer extension.

Step 1: install Rust using Rustup, as described at Rustup installation.

Install on DebbyBuster

See code.visualstudio.com, where you can find a .deb file. On DebbyBuster, download .deb file into /Downloads. Then apt install ./code_1.41_etc. See the VS Code node.js tutorial. Steps:
  • nvm install --lts (to run with the latest long term support version of nvm - otherwise VS Code/execution complains)
  • create HelloWorldApp.js using VSCode
  • cd /home/marc/VS-Code/HelloWorld
  • node HelloWorldApp.js (executes)
  • alternatively, execute within VSCode
    • open a terminal and 'node HelloWorldApp.js'
    • 'run' - but this insists on using an outdated version of node.js and fails - where is this configured? - in the workspace's .vscode/launch.json file - use "runtimeVersion": "14.17.4"
So it is useful to have
  • jsconfig.json - the presence of a jsconfig.json file in a directory indicates that the directory is the root of a JavaScript project - see https://code.visualstudio.com/docs/languages/jsconfig
  • launch.json - described at https://github.com/microsoft/vscode-js-debug/blob/main/OPTIONS.md
Minimal jsconfig.json file:
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es6"
  },
  "exclude": ["node_modules"]
}

Minimal launch.json file:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "type": "pwa-node",
            "runtimeVersion": "14.17.4",
            "request": "launch",
            "name": "Launch Program",
            "skipFiles": [
                "<node_internals>/**"
            ],
            "program": "${workspaceFolder}/HelloWorldApp.js"
        }
    ]
}

Extend with 'Express'

See https://expressjs.com/, and ITDEV information. Express is an application framework for building and running Node.js applications. You can scaffold (create) a new Express application using the Express Generator tool. The Express Generator is shipped as an NPM module and installed as 'npm install -g express-generator'.

Scaffold a new Express application 'myExpressApp' by 'express myExpressApp'.

Result:
  • create : myExpressApp/ --- contains the app.js application
  • create : myExpressApp/public/ --- public code and data etc - static content
  • create : myExpressApp/public/javascripts/
  • create : myExpressApp/public/images/
  • create : myExpressApp/public/stylesheets/
  • create : myExpressApp/public/stylesheets/style.css
  • create : myExpressApp/routes/ ---how an application responds to a client request to a particular endpoint, which is a URI (or path) and a specific HTTP request method (GET, POST, and so on), for details refer to https://expressjs.com/en/guide/routing.html.
  • create : myExpressApp/routes/index.js
  • create : myExpressApp/routes/users.js
  • create : myExpressApp/views/
  • create : myExpressApp/views/error.jade
  • create : myExpressApp/views/index.jade
  • create : myExpressApp/views/layout.jade
  • create : myExpressApp/app.js
  • create : myExpressApp/package.json ---the package information
  • create : myExpressApp/bin/
  • create : myExpressApp/bin/www ---a javascript that creates an http server (not a binary)
Then:
  • change directory: $ cd myExpressApp
  • install dependencies: $ npm install
  • run the app: $ DEBUG=myexpressapp:* npm start

This creates a new folder called myExpressApp with the contents of your application. To install all of the application's dependencies (again shipped as NPM modules), go to the new folder and execute npm install:
  • cd myExpressApp
  • npm install
At this point, we should test that our application runs. The generated Express application has a package.json file which includes a start script to run node ./bin/www. This will start the Node.js application running.

From a terminal in the Express application folder, run 'npm start'. The Node.js web server will start and you can browse to http://localhost:3000 to see the running application.
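To double-check that the server really answers (a sketch; assumes the default port 3000 and that curl is installed):

  curl -I http://localhost:3000    # run in a second terminal while 'npm start' is active; expect HTTP 200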

Use 'npm ls' to see what is inside your app.

Which leaves the question: what is an 'app'?

Install on Kali

See code.visualstudio.com, where you can find a .deb file. On Kali, download the .deb file into /Downloads. Then apt install ./code_1.41_etc. Software in /Sierra/VSCode. Start as 'code', but it will request a dedicated data directory if you're working as root. So create /root/Sierra/VSCode and start as 'code --user-data-dir /root/Sierra/VSCode'.

Refer to https://code.visualstudio.com/docs/nodejs/nodejs-tutorial. Steps:
  • Install and run the Express Generator to create an Express application structure
  • npm install to install all the dependencies
  • an Express application has a package.json file that includes a start script to run node ./bin/www
  • next do npm start
  • this starts the node.js webserver, and you can visit http://localhost:3000

Visual Studio Code - VSCode for Rust

Install on GrayTiger

IntelliJ IDEA

Refer to IntelliJ on GrayTiger Debian 11 Bullseye

See here.

Git

Basics

Version control system created by Linus Torvalds in 2005. As with most other distributed version control systems, every Git directory on every computer is a full-fledged repository with complete history and full version tracking abilities, independent of network access or a central server.

See also

Debian info

Installation of git on Debian

Via Synaptic I see
  • GrayTiger: git and git-man installed - and plenty of packages not installed; 'git --version' yields git version 2.30.2
  • BlackTiger: same; git version 2.20.1
  • Observation in Synaptics: weird messages such as installation of 'git-all' requires removal of virtually all gnome stuff (?).
In general it seems to work.

Github CLI gh

Basics

GitHub CLI (gh) is a command line interface specifically for working with GitHub; it makes scripting GitHub tasks easier. The remote repository may be hosted on GitHub or it may be hosted by another service.
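Typical first steps with gh (a sketch; the repository name is only an example):

  gh auth login            # interactive authentication against github.com
  gh repo clone cli/cli    # clone a repository through gh
  gh issue list            # list open issues of the current repository
  gh pr list               # list open pull requests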

Installation of gh on Debian

See https://github.com/cli/cli/blob/trunk/docs/install_linux.md

Repo for OPTEE

Basics

Repo is a command on top of git. It is an executable Python script that you can put anywhere in your path.
  • homepage at https://gerrit.googlesource.com/git-repo
  • doc at https://source.android.com/docs/setup/create/repo
  • Repo install including chmod command: https://source.android.com/docs/setup/download

Repo installation attempt 1

  • Installation on BlackTiger: 'sudo apt-get install repo' (ok)
  • Result: man page works, repo commands do not, ...
    • In /home/OPTEE there is ./repo (deleted since repo fails to work)
    • In / there is .repoconfig with GPG keys
  • Execution: 'repo init -u url ... ' (not ok, fails saying 'command not found, syntax error, line 94')
    • Essentially 'repo' is a python script, main.py, with a wrapper 'repo'. The lines around 94 deal with minimum Python version, and messages such as your Python 3 is too old.
    • Which leads to the question: what Python is BlackTiger running? 'apt show python' reveals 'version 2.7.16-1'. On python.org you find the most recent version Oct 2023 is 3.12, so 2.7 is probably too old.
    • Refer to Python language discussion.
    • Refer to Python installation.

Repo installation attempt 2

  • Installation on BlackTiger: 'sudo apt-get install repo' (ok)
  • Then 'dpkg -L repo' returns
    • /usr/bin/repo - which is a script that checks Python versions, contains 'Init', 'Fetch' and much more
    • /usr/share/bash-completion/completions/repo - bash tab-completion definitions for the repo command
    • /usr/share/doc/repo - minimal info
    • /usr/share/man/man1/repo.1.gz - man page - referring to use in Android source download
    Repo install including chmod command (if needed):
    • https://source.android.com/docs/setup/download
    • https://source.android.com/docs/setup/download#installing-repo
  • Installation verification: 'repo version' should return something like 'repo not installed; repo launcher version 2.15 (from /usr/bin/repo)'. OK, try repo init next.
  • Next is initialisation, see https://source.android.com/docs/setup/download/downloading#initializing-a-repo-client, this fails
    • Get https://gerrit.googlesource.com/git-repo/clone.bundle
    • Get https://gerrit.googlesource.com/git-repo/
    • File "/home/marc/OPTEE/.repo/repo/main.py", line 94, ) syntax error - invalid syntax
    • Some googling: may have to do with your Python version (what does repo expect???). Possible solution: invoke through python3:
      • python3 /usr/bin/repo init -u https://git.gitlab.arm.com/arm-reference-solutions/arm-reference-solutions-manifest.git -m pinned-rdv1.xml -b refs/tags/RD-INFRA-2021.02.24
      • python3 /usr/bin/repo sync -c -j $(nproc) --fetch-submodules --force-sync
    • Trying: 'python3 /usr/bin/repo init' - results in 'please use python 2.7' - 'a new version of repo is available' - 'the launcher is run from /usr/bin/repo, the launcher is not writable' - 'fatal: manifest url is required'
    • Trying: 'python3 /usr/bin/repo help init' - results in 'please use python 2.7' - 'a new version of repo is available' - 'the launcher is run from /usr/bin/repo, the launcher is not writable' - 'fatal: manifest url is required' - followed by the help information on 'init'
  • Trying again since the main error seems to be 'manifest url is required': 'python3 /usr/bin/repo init -u https://github.com/OP-TEE/manifest.git -m qemu_v8.xml' - OK 'repo has been initialised in /home/marc/OPTEE'
  • Then need to sync it: 'python3 /usr/bin/repo sync' - OK results in many subdirs under /home/marc/OPTEE:
    • build, buildroot
    • hafnium
    • linux
    • mbedtls
    • optee_benchmark
    • optee_client
    • optee_examples
    • optee_os
    • optee_rust
    • optee_test
    • qemu
    • trusted-firmware-a
    • u-boot
Sidebar on manual repo installation
  • mkdir -p ~/.bin
  • PATH="${HOME}/.bin:${PATH}"
  • curl https://storage.googleapis.com/git-repo-downloads/repo > ~/.bin/repo
  • chmod a+rx ~/.bin/repo
  • ... where 'a' corresponds to all, i.e. user, group and others - so this would mean: give all read and execute rights on repo - but if not writeable, maybe rwx - ok, did so on BlackTiger

Python and SageMath

For an intro refer to local webinfo.

Python versions

  • Python 2.0 was released in 2000. Python 2.7.18, released in 2020, was the last release of Python 2.
  • Python 3.0, released in 2008, was a major revision not completely backward-compatible with earlier versions.
  • What's on GrayTiger? Synaptics: python2 (version 2.7.18), python3 (3.9.2), and a zillion other packages/libs.
  • What's on BlackTiger? Synaptics: python2 (version 2.7.16) , python3 (version 3.7.3), and a zillion other packages/libs.

Invocation

Python interpreter

At the command prompt, entering 'python' gives you python2; otherwise enter 'python3'. There are packages for Jupyter, and for SageMath.
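To check which interpreters a machine actually has (a sketch):

  which -a python python2 python3   # which binaries resolve
  python3 --version
  python2 --version 2>&1            # python2 historically prints its version on stderr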

Jupyter notebook

Synaptics: installed package jupyter (python3), a 'metapackage', which installed many others. Synaptics message: 'W: Download is performed unsandboxed as root as file '/root/.synaptic/tmp//tmp_sh' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)'

CLI: 'jupyter notebook'

Sagemath notebook

Synaptic: package sagemath-jupyter

What's installed: 'apt show sagemath' or 'dpkg -L sagemath'. Impressive list, starting from

Refer also to LW0200MATHEMATICS.

Invocation:
  • 'sage' in a console starts a Sage session. To quit the session enter quit and then press Enter.
  • To start a Jupyter Notebook instead of a Sage console, run the command 'sage -n jupyter', then select 'New'
    • HELP takes you to tutorials etc
  • For usage refer to LTK-SageMath.

Rust

Basics

For an intro refer to local Rust language info. Tools:
  • rustc is the compiler for the Rust programming language, provided by the project itself; it takes your source code and produces binary code, either as a library or an executable
  • cargo is the package manager and build system (see the short sketch after this list)
    • To see how cargo calls rustc, you can do cargo build --verbose, and it will print out each rustc invocation.
  • rustup is the installation tool
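A minimal cargo round trip, to see the tools in action (a sketch; the project name is arbitrary):

  cargo new hello_ltk       # creates hello_ltk/ with Cargo.toml and src/main.rs
  cd hello_ltk
  cargo build --verbose     # shows the underlying rustc invocation(s)
  cargo run                 # builds if needed, then runs the binary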

Rust installation

Intro - Rust's rustup

rustup is a toolchain multiplexer. It installs and manages many Rust toolchains and presents them all through a single set of tools installed to ~/.cargo/bin.
  • The rustc and cargo executables installed in ~/.cargo/bin are proxies that delegate to the real toolchain.
  • rustup then provides mechanisms to easily change the active toolchain by reconfiguring the behavior of the proxies.
So when rustup is first installed, running rustc will run the proxy in $HOME/.cargo/bin/rustc, which in turn will run the stable compiler. If you later change the default toolchain to nightly with rustup default nightly, then that same proxy will run the nightly compiler instead. This is similar to Ruby's rbenv, Python's pyenv, or Node's nvm.
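Common rustup operations that exercise this proxy mechanism (a sketch):

  rustup show                    # active toolchain plus everything installed
  rustup toolchain list
  rustup default nightly         # make nightly the global default
  rustup override set stable     # pin the toolchain for the current project directory
  rustup update                  # update all installed toolchains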

Intro - the Debian view

Debian ships the Rust compiler (rustc), the Rust package manager and build system (cargo), a number of packages written in Rust or including modules written in Rust, and a growing number of Rust crates (Rust libraries). And then there's debcargo, the official tool for packaging Rust crates to be part of the Debian system. Due to the way Rust and the Rust crates are packaged in Debian, packages shipping Rust crates are only useful when packaging other crates or Rust-based software for Debian. While you can use them for everyday development with Rust, their main use is as dependencies for other applications since Debian doesn't allow downloading source code at build-time.

To only use the local (debian) version of crates, place the following snippet in a .cargo/config file at your projects' root:
  [source]
  [source.debian-packages]
  directory = "/usr/share/cargo/registry"
  [source.crates-io]
  replace-with = "debian-packages"
.. and install any of its crate dependencies via 'apt install librust-CRATE-dev' (or declare them in debian/control if it's for a package).
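To see what Debian actually packages, you can query apt directly (a sketch; serde is just an example crate name):

  apt list 'librust-*-dev' 2>/dev/null | less   # all Rust crates packaged by Debian
  apt search librust-serde                      # look for a specific crate
  sudo apt install librust-serde-dev            # its source then lands under /usr/share/cargo/registry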

For everyday Rust development, you may find rustup useful, as it provides a convenient way to install Rust toolchains and switch between them globally or on a per-project basis. Unfortunately, Debian currently doesn’t ship rustup as a package.

Installation

  • https://rust-lang.github.io/rustup/ for info
  • https://rustup.rs/ explains to run 'curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh' .
The latter returns:
info: downloading installer
Welcome to Rust!
This will download and install the official compiler for the Rust programming language, and its package manager, Cargo.
  • Rustup toolchains and metadata are installed into the Rustup home directory, located at: /home/marc/.rustup - can be modified with the RUSTUP_HOME environment variable
  • The Cargo home directory is located at: /home/marc/.cargo - can be modified with the CARGO_HOME environment variable
The cargo, rustc, rustup and other commands will be added to Cargo's bin directory, located at: /home/marc/.cargo/bin. This path will then be added to your PATH environment variable by modifying the profile files located at /home/marc/.profile and/or /home/marc/.bashrc. You can uninstall at any time with rustup self uninstall and these changes will be reverted.
Current installation options:
  default host triple: x86_64-unknown-linux-gnu
  default toolchain: stable (default)
  profile: default
  modify PATH variable: yes
1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
I went for 1)...
info: profile set to 'default'
info: default host triple is x86_64-unknown-linux-gnu
info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
info: latest update on 2023-10-05, rust version 1.73.0 (cc66ad468 2023-10-03)
info: downloading component 'cargo'
  7.8 MiB /   7.8 MiB (100 %)   2.3 MiB/s in  3s ETA:  0s
info: downloading component 'clippy'
info: downloading component 'rust-docs'
 13.8 MiB /  13.8 MiB (100 %)   3.2 MiB/s in  4s ETA:  0s
info: downloading component 'rust-std'
 24.7 MiB /  24.7 MiB (100 %)   2.2 MiB/s in 11s ETA:  0s
info: downloading component 'rustc'
 61.6 MiB /  61.6 MiB (100 %)   2.5 MiB/s in 25s ETA:  0s
info: downloading component 'rustfmt'
info: installing component 'cargo'
info: installing component 'clippy'
info: installing component 'rust-docs'
 13.8 MiB /  13.8 MiB (100 %)   8.0 MiB/s in  1s ETA:  0s
info: installing component 'rust-std'
 24.7 MiB /  24.7 MiB (100 %)  17.2 MiB/s in  1s ETA:  0s
info: installing component 'rustc'
 61.6 MiB /  61.6 MiB (100 %)  19.1 MiB/s in  3s ETA:  0s
info: installing component 'rustfmt'
info: default toolchain set to 'stable-x86_64-unknown-linux-gnu'

stable-x86_64-unknown-linux-gnu installed - rustc 1.73.0 (cc66ad468 2023-10-03)

Rust is installed now. Great!

To get started you may need to restart your current shell. 
This would reload your PATH environment variable to include Cargo's bin directory ($HOME/.cargo/bin).

To configure your current shell, run: source "$HOME/.cargo/env"   // 'help source' : Execute commands from a file in the current shell.
What's my path? Use 'echo $PATH'. After restarting the shell, ~/.cargo/bin is on the path, nice. Now I can invoke e.g. rustc.
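A quick smoke test without cargo (a sketch; the file name is arbitrary):

  printf 'fn main() { println!("hello from rustc"); }\n' > hello.rs
  rustc hello.rs    # produces an executable named hello
  ./hello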

Installation outcome

GrayTiger
BlackTiger

Omid's SP-MAC-EQ in Rust

Round 1 on GrayTiger

At marc@GrayTiger:~/OPTEE_GT/optee_rust/examples/SP-MAC-EQ$, run cargo as follows:
cargo build --verbose
info: syncing channel updates for 'nightly-2021-09-20-x86_64-unknown-linux-gnu'
info: latest update on 2021-09-20, rust version 1.57.0-nightly (5ecc8ad84 2021-09-19)
info: downloading component 'cargo'
info: downloading component 'clippy'
info: downloading component 'rust-docs'
info: downloading component 'rust-std'
info: downloading component 'rustc'
info: downloading component 'rustfmt'
info: installing component 'cargo'
info: installing component 'clippy'
info: installing component 'rust-docs'
info: installing component 'rust-std'
info: installing component 'rustc'
info: installing component 'rustfmt'
    Updating crates.io index
error: failed to select a version for the requirement `tonic = "^0.10.0"`
candidate versions found which didn't match: 0.8.3, 0.8.2, 0.8.1, ...
location searched: crates.io index
required by package `SP-MAC-EQ v0.1.0 (/home/marc/OPTEE_GT/optee_rust/examples/SP-MAC-EQ)`
Analysis:
  • The failed requirement for tonic is stated in the 'Cargo.toml' file under dependencies. Tonic is an implementation of the gRPC framework, see https://crates.io/crates/tonic and https://grpc.io/.
  • Feedback from Cargo includes 'location searched: crates.io index'
  • At https://crates.io/crates/tonic/versions you find there have so far been 38 versions of Tonic, with the most recent being 0.10.2, from September 28, 2023
  • Tonic version 0.10.0 is available at crates.io, and dates back to September 1, 2023
  • So why does this fail?
  • There's also 'candidate versions found which didn't match: 0.8.3, 0.8.2, 0.8.1, ...' - why, what version do we actually need??? All these 0.8.* versions are also listed at crates.io...
  • According to the doc you can find detailed info in file 'Cargo.lock' - however, in /SP-MAC-EQ there is no such file (plenty of such files in other dirs)
  • According to https://blog.illixion.com/2021/10/fix-failed-to-select-a-version-cargo/, 'Apparently, Cargo can sometimes get into a state where its local registry cache will corrupt itself, and Cargo won’t be able to discover new versions of the dependencies. To resolve this, delete the ~/.cargo/registry folder and run the build command again.'
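That cache reset is easy to try (a sketch; harmless, cargo rebuilds its registry cache on the next run):

  rm -rf ~/.cargo/registry    # drop the possibly corrupted local registry cache
  cargo build --verbose       # re-fetches the crates.io index and retries the build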
Analysis 2:
  • Consider https://wiki.debian.org/Rust, which has a statement about 'To only use the local (Debian) version of crates, place the following snippet in a .cargo/config file at your projects' root: ' ... '.. and install any of its crate dependencies via apt install librust-CRATE-dev (or declare them in debian/control if it's for a package). '
  • So:
    • What local Debian versions of crates are already in place?
      • List: Synaptics shows many librust-... libraries are available, including librust-crates-io-dev, librust-cargo-..., ... XXXXXXXXXXXXXXXXXXXX
    • Do I have 'librust-CRATE-dev ', or can I do 'apt install librust-CRATE-dev'?
/home/marc/OPTEE_GT/optee_rust/.cargo/config

X-server and Wayland

BlackTiger - Debian 10 Buster

First there was the X11 display server, which was succeeded by Wayland 'as default' (sic), e.g. in Debian as of Debian 10, Buster.

How do you find out what you are running, X11 or Wayland?
  • Approach 1: 'echo $XDG_SESSION_TYPE' ---> returns 'X11'
  • Approach 2: 'loginctl' lists the sessions, then 'loginctl show-session <session id> -p Type' ---> returns 'X11'
Further: 'systemctl' reveals that LDM is running. This is confusing:
  • lightdm is the X11 Light Display Manager (LightDM)
    • there are config files on BlackTiger in /etc/lightdm: keys.conf, lightdm.conf (barebones), lightdm-gtk-greeter.conf, users.conf
  • ldm is the LTSP display manager (Linux Terminal Server Project), an X11 display manager (similar to xdm, gdm and kdm), but unlike those it wraps the X11 traffic within an SSH tunnel to provide a secure login mechanism for remote X sessions.

Wayland

If I do 'apt list --installed > apt-list-installed-20210727.txt' and look at the result, I learn that I have both X11 and Xwayland installed.

X11

Remember :
  • X, the so-called X-server (e.g. XF86 or MetroX) is in fact the display and keyboard server;
  • the window manager (e.g. fvwm1, fvwm2, fvwm95, olwm, mwm (motif or lesstif), kde's wm) is a special client that draws the windows for the other Xclients;
  • Xclients are all these Xapplications.
What does rpm tell me about X? Go into glint (the package manager), and query under X11. You'll find some packages like fvwm, and the query will show you all the files (executables, definitions of resources, man pages, ...).

Customization of XF86, the X-server

What have we got to start with in case of Red Hat?

Obviously, in case of SuSE, use SaX. Well, you run with an XF86Config file that you defined when first installing X. Run 'SuperProbe' to find out the very basics, even before X is willing to start.

Run 'X -probeonly' and 'X -showconfig' to find out the basic parameters of the current set-up. X -probeonly : 'SVGA, chipset clgd5436, videoram 1024K, clocks 25.23 .. 135.00, mode : 640x480, no mode def named 800x600' X -showconfig : 'XFree86 3.1.2 / X Windows System V.11, revision 0, vendor release 6000, configured drivers : SVGA for 8 bit colour SVGA ...clgd5436...generic...

Now how do I tailor my resolution? Via XF86Config, read on.

V.102.2 Initial customization via .Xclients

Created a .Xclients in my home directory of root (/root/.Xclients). If there is already an existing .Xclients file, save this as .Xclients.original. I'm sure you know you can verify the existence of . files via 'ls -a'. In my customized .Xclients file, I specified a minimal set-up of clients, and I start fvwm (rather than fvwm95).

V.102.3 Desktop customization via system.fvwmrc

Further desktop customization is carried out via /etc/X11/fvwm/system.fvwmrc . Here you set-up the pager, the colors, the menu items... .

Summary of configurable items

Part 1 : X - the display & keyboard server

CI 1 : the X server program
In fact 'X' (or rather /usr/X11R6/bin/X) is a symbolic link to the actual server program, e.g. /usr/X11R6/XF86_SVGA. This link is built via a 'ln' command. You can run SuperProbe to determine the setting of this link.

CI 2 : the X server configuration file
In pre-SAX systems, basic X configuration information went into /usr/X11R6/lib/X11/XF86Config. Now it seems to go into /etc/X11/XF86Config. Here you find the various sections :

Section "Files"
Section "ServerFlags"
Section "Keyboard"
Section "Pointer"
Section "Monitor" ... identifier - modes - modelines (documented in /usr/X11R6/lib/X11/doc) Section "Device" (linked to the chipset) Section "Screen" ... here we define Driver (the X server, e.g. SVGA), Device (cfr supra), Monitor (cfr supra), and a SubSection "Display" including the resolutions "1024x768"...

===> The easiest way to define this XF86Config file is by running the xf86config program.
Manually adjusting the contents of this file is, euh, very hard.

===> Your keyboard can be redefined through the XF86Config file. This might lead to problems with AZERTY keyboards etc. In case of doubt, disable these keyboard extensions in the Section "Keyboard".

===> Once you have e.g. three resolutions defined, you can toggle between them using cntl-alt-numkeypad minus/plus.

===> If your Xserver hangs, you can use Cntl-Alt-Backspace to kill it.
  Part 2 : fvwm - the window manager

CI 3 : /etc/X11/fvwm/system.fvwmrc
Overall window manager & desktop settings. Here you call the executable program from the menu option.

CI 4 /root/.Xclients :
--- use the xsetroot command to set the root window ---
 

Part 3 : individual application settings

... to be further elaborated ...
 

SuSe and SaX

SuSe's SaX will write your XF86Config into /etc/X11/XF86Config. So if you want to reuse the pre-toothbrush XF86Config, I guess you'll have to write it there... For some odd reason, the old XF86Config file does not seem to work. Fortunately SaX is pretty good. Running "startx" will create server logfiles in "/root/Serverlog".

You can use commands such as "xset q" to find out about settings.

KDE

Reinstall via 'sudo apt-get install kubuntu-desktop'

X goodies

Try: xinfo, xkill, xosview, xnetload, xgrab(sc), xwd, ...

Remote desktop - VNC and Krdc

Server

Virtual Network Computing (VNC) mirrors the desktop of a remote ("server") computer on your local ("client") computer (it is not a separate remote login, as is XDMCP). A user on the remote desktop must be logged in and running a VNC server (such as X11VNC, Vino, or Krfb). Keyboard and mouse events are transmitted between the two computers. VNC is platform-independent: a VNC viewer on one operating system can usually connect to a VNC server on any other operating system. (Windows users can use one of several clients such as UltraVNC Viewer.)
  • X11VNC is common under KDE, Vino is the default Gnome server
  • install via sudo apt-get install x11vnc
  • start server without password (not recommended) x11vnc -forever -rfbport 5900
  • better to (1) create password:
    • mkdir ~/.vnc
    • x11vnc -storepasswd YOUR_PASSWORD ~/.vnc/x11vnc.pass
      Then start server using the password
    • x11vnc -forever -rfbport 5900 -rfbauth ~/.vnc/x11vnc.pass -o ~/.vnc/x11vnc.log -loopbg -display :0
    • put "/usr/bin/x11vnc -forever -rfbport 5900 -rfbauth ~/.vnc/x11vnc.pass -o ~/.vnc/x11vnc.log -loopbg -display :0" in X11VNCstartscript

Client

Krdc is the default VNC client in Kubuntu/KDE. It can be used for both VNC and RDP connections.
  • install via sudo apt-get install krdc
  • use via icon or invoke as krdc vnc://
  • connecting to a Windows server: krdc rdp://: (mind you: 'rdp')

Audio, sound, music

Refer also to IT Linux audio introduction

AV Linux

On dedicated machine.

Clevo Debbybuster

See Debian settings - Sound to verify the sound hardware is detected. The soundcard is the device that handles input and output of audio.

Refer to sound hardware commands for GrayTiger hw info.

Mentions ALC293. Probably from Realtek, a Taiwanese manufacturer, https://en.wikipedia.org/wiki/Realtek, the Realtek website has no info on ALC293...

Realtek's Audio Solutions are based on Avance Logic technology, which can also be recognized by the prefixes "ALG" (Avance Logic Graphics) and "ALS" (Avance Logic Sound).

ALSA

ALSA intro

Advanced Linux Sound Architecture (ALSA) is a software framework and part of the Linux kernel that provides an application programming interface (API) for sound card device drivers. On Linux, sound servers, like sndio, PulseAudio, JACK, PipeWire, and higher-level APIs (e.g. OpenAL, SDL audio, etc.) work on top of ALSA and its sound card device drivers. ALSA succeeded the older Linux port of the Open Sound System (OSS). Refer to ALSA (Wikipedia) and ALSA wiki.

In addition to the software framework internal to the Linux kernel, the ALSA project also provides command-line tools and utilities (alsamixer, amixer, aplay, arecord, aconnect, ...; see 'ALSA utilities' below).

ALSA and hardware

To uniquely identify each piece of audio hardware on a system, ALSA assigns them unique names. Usually, "hw:0" is the name of your soundcard. The various audio programs assume that they will be working with hw:0, but they all provide ways to change this.

To figure out what audio device names have been assigned to which devices:
  • check /proc/asound/cards: 'cat /proc/asound/cards' - on GrayTiger:
    0 [Generic    ]: HDA-Intel - HD-Audio Generic
                     HD-Audio Generic at 0xd04c8000 irq 76
    1 [Generic_1  ]: HDA-Intel - HD-Audio Generic
                     HD-Audio Generic at 0xd04c0000 irq 77
    
    • The numbers to the left indicate the card number. This means hw:0 and hw:1 are GrayTiger's ALSA device names.

  • But this doesn't tell the whole story. There may be multiple devices per card, aplay -l shows:
  • **** List of PLAYBACK Hardware Devices ****
    card 0: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 0: Generic [HD-Audio Generic], device 7: HDMI 1 [HDMI 1]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: Generic_1 [HD-Audio Generic], device 0: ALC293 Analog [ALC293 Analog]
      Subdevices: 0/1
      Subdevice #0: subdevice #0
    
    • card 0:device 3: HDMI 0 ---> hw:0,3
    • card 0:device 7: HDMI 1 ---> hw:0,7
    • card 1:device 0: ALC293 Analog ---> hw:1,0
    • Note that there is also a subdevice level. It appears that the general form is hw:card,device,subdevice. If you leave subdevice or device off, it assumes 0.
  • So on GrayTiger: try hw:1,0 first
    • sox -b 16 -n test.wav rate 44100 channels 2 synth 1 sine 440 gain -1
    • aplay -D hw:1 test.wav
    • ---> 'aplay: main:830: audio open error: Device or resource busy' ---> which means someone else (like pulseaudio) is using the soundcard. In that case "-D pulse" might work.
    • ---> 'echo "suspend 1" | pacmd' ---> suspends pulseaudio
    • aplay -D hw:1 test.wav
    • ---> ok, works
    • ---> 'echo "suspend 0" | pacmd' ---> unsuspends pulseaudio

ALSA config files

ALSA utilities

  • alsamixer, an ncurses-version of amixer, shows you what you've got available
  • alsactl is a way to save settings for your device (?)
  • amixer is a command line app which allows adjustments to be made to a device's volume and sound controls
  • The aconnect and aseqview applications are for making MIDI connections and viewing the list of connected ports.
    • aconnect connects/disconnect two existing ports on the ALSA sequencer system. The ports with the arbitrary subscription permission, such as created by aseqview, can be connected to any (MIDI) device ports using aconnect. For example, to connect from port 64:0 to 65:0, run as follows: 'aconnect 64:0 65:0'
    • To list ports: 'aconnect -i' and 'aconnect -o'
    • To remove all existing exported connections: 'aconnect -x', useful for terminating the ALSA drivers, because the modules with sequencer connections cannot be unloaded unless their connections are removed.
    • aseqview: as explained on https://tracker.debian.org/pkg/aseqview : removed from Debian
  • The aplay and arecord applications are for commandline playback and recording of a number of file types including raw, wave and aiff at all the sample rates, bitdepths and channel counts known to the ALSA library - DebbyBuster: ALC293 Analog
  • aplay - switch -l will list playback devices
There is an alsaplayer-jack module (Synaptics). Installed on GrayTiger, role?

There is a Debian 'aconnectgui', which does not seem to do anything useful on GrayTiger.

PulseAudio - by default present on GrayTiger

PulseAudio is a sound server providing mixing and input/output routing.
  • To find out if it's running: 'pgrep -a pulseaudio'
  • ---> GrayTiger: '1707 /usr/bin/pulseaudio --daemonize=no --log-target=journal'
Here are three ways to stop pulseaudio should that be necessary (reboot should bring it back):
  • pasuspender -- jackd -d alsa --device hw:1 --rate 44100 --period 128
  • echo "suspend 1" | pacmd // to unsuspend: echo "suspend 0" | pacmd
  • If you'd like to get rid of pulseaudio at boot, edit /etc/pulse/client.conf and set autospawn to no: 'autospawn = no'. After a reboot, pulseaudio will not come up unless you ask for it manually: $ pulseaudio --start

In a typical installation scenario under Linux, the user configures ALSA to use a virtual device provided by PulseAudio. Thus, applications using ALSA will output sound to PulseAudio, which then uses ALSA itself to access the real sound card.

So how do you configure ALSA? See ALSA.

PulseAudio operates as a proxy between sound applications and the audio hardware (usually via ALSA). PulseAudio Volume Control provides a "Monitor" device which listens for the audio output of other applications such as Firefox or Rhythmbox.

Setting PulseAudio Volume Control to capture from the Monitor device lets Audacity record computer playback when its input device is set to pulse.
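A command-line variant of the same idea (a sketch; take the real monitor source name from the pactl output, the one below is only an example):

  pactl list short sources    # monitor sources end in '.monitor'
  # record whatever is currently playing into a WAV file; stop with Ctrl-C
  parecord --device=alsa_output.pci-0000_05_00.6.analog-stereo.monitor capture.wav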

See https://en.wikipedia.org/wiki/PulseAudio

Working with Audacity: https://manual.audacityteam.org/man/tutorial_recording_computer_playback_on_linux.html.

Applications that do not directly support Jack may also be used with Jack on a system that uses PulseAudio (such as Ubuntu and Debian based distributions) by installing "pulseaudio-module-jack".

This provides the modules "Jack Source" and "Jack Sink" that allow PulseAudio to use Jack.

For example, to record sounds playing through Firefox, PulseAudio Volume Control (pavucontrol) can be used to direct the output from Firefox to Jack Sink. The recording input for Audacity can then be set to record from "PulseAudio Jack Sink" and the sound will be recorded.
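Should the bridge modules not be loaded automatically, they can be loaded by hand (a sketch; module names as shipped in pulseaudio-module-jack):

  pactl load-module module-jack-sink      # PulseAudio output into JACK ('Jack Sink')
  pactl load-module module-jack-source    # JACK input into PulseAudio ('Jack Source')
  pactl list short modules | grep jack    # verify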

Linux sound files

JACK

JACK is a sound server for audio production. JACK controls your audio and MIDI settings. It allows you to choose your audio interface as well as all the important audio settings such as sample rate, buffer size and periods.

Any inputs and outputs from your audio interface and/or JACK aware programs can arbitrarily be connected together, i.e. ALSA, MIDI and audio connections. JACK not only deals with connections between programs but also within programs.

JACK takes over the soundcard on your computer. This means that your usual audio and video players will be broken while JACK is running. This includes rhythmbox, amarok, vlc, Adobe flash, etc....If your normal audio and video players aren't working, try "killall jackd".

Installation - JACK1 or JACK2?

On GrayTiger, Synaptics includes a whole series of JACK and related stuff. First decision: jack1 or jack2? Hence: https://github.com/jackaudio/jackaudio.github.com/wiki/Differences-between-jack1-and-jack2 => Use JACK2, as that is the one being actively worked on and maintained. If you're feeling brave, try PipeWire.

Hence: preference for jack2, which comes with qjackctl. GrayTiger: Synaptics includes jackd, jackd1 (not selected), jackd2. Ted Felix suggests jackd+jackd2.
  • Run the jackdaemon: 'jackd -d alsa --device hw:1 --rate 44100 --period 128'
  • ---> The playback device "hw:1" is already in use. Please stop the application using it and run JACK again
  • ---> Might be due to pulseaudio - try 'echo "suspend 1" | pacmd' but that makes no change. Might be due to open Audacity, fluidsynth, qsynth, ...
  • ---> ps -e | grep fluidsynth ---> PID 418885 ---> sudo kill 418885 ---> jackd -d alsa --device hw:1 --rate 44100 --period 128 ---> starts ok now
  • Test jack: install 'jack-tools' via Synaptics
  • Generate a .wav file: 'sox -b 16 -n test.wav rate 44100 channels 2 synth 1 sine 440 gain -1'
  • Tell jack-play what JACK port to connect to via the JACK_PLAY_CONNECT_TO environment variable: 'export JACK_PLAY_CONNECT_TO=system:playback_%d'
  • Note: The "%d" is expanded to the channel number while connecting. So, with a stereo WAV file and the above value, jack-play will connect to system:playback_1 and system:playback_2.
  • Test jack: 'jack-play test.wav'
  • Stop jack: 'killall jackd'

JACK2 config and qjackctl

JACK2 config and .jackdrc - Ted Felix approach

Since most Linux music-making applications depend on JACK, and JACK's defaults are not suitable for music-making, we need to set up a .jackdrc file. The .jackdrc file lives in your home directory and it contains the command line that programs should use to start JACK if it isn't already running. E.g.: '/usr/bin/jackd -d alsa --device hw:1 --rate 44100 --period 128'

The only difference between this and what we did at the command line is the full pathname to jackd, /usr/bin/jackd. Make sure you set up a .jackdrc file.

GrayTiger: created a .jackdrc file in /home/marc, contents: '/usr/bin/jackd -d alsa --device hw:1 --rate 44100 --period 128'
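One way to create that file (a sketch, matching the line above):

  echo '/usr/bin/jackd -d alsa --device hw:1 --rate 44100 --period 128' > ~/.jackdrc
  cat ~/.jackdrc    # double-check the command line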

Note: qjackctl (the JACK GUI) will clobber your .jackdrc file without warning. If you find .jackdrc useful, you should keep a backup of it and avoid qjackctl.

Jack and FluidSynth

Run fluidsynth with Jack - Ted Felix approach

To run fluidsynth with JACK, bring up another terminal, and: '$ fluidsynth --server --audio-driver=jack --connect-jack-outputs /usr/share/sounds/sf2/FluidR3_GM.sf2'. This states a.o. 'fluidsynth: Connection of Midi Through Port-0 succeeded'.

To test, bring up another terminal and use aplaymidi to send a MIDI file to fluidsynth's port.
  • First use aplaymidi to find out what port number fluidsynth is waiting on: '$ aplaymidi -l'
  • If fluidsynth is on port 128: 'aplaymidi -p 128:0 song.mid'
  • On GrayTiger: aplaymidi -p 128:0 '/home/marc/Documents/Alpha/002-Sel Marc prive/201-MLS-Guitar/Z_MIDI_files/bach_846.mid'
Be sure to check which port fluidsynth is on, as it can change.

To bring everything down, first stop fluidsynth by entering the "quit" command at fluidsynth's ">" prompt. If it hangs here (Ubuntu 22.04 seems to have problems), press Ctrl+Z and then do a "pkill -9 fluidsynth".

Then switch to the terminal that is running JACK and hit Ctrl-C. Worst-case, you can use killall to stop JACK: '$ killall jackd'

JACK2 config - complementary info

See beginners guide.

Setting up JACK assumes that you have a properly configured realtime set up. KXStudio and AVLinux are two Linux distros that provide such an environment out of the box.

JACK itself is a command line program but there are graphical managers of two main types:
  • JACK set up managers, which allow you to start up JACK with specific settings
  • JACK connection managers, which primarily deal with making connections

qjackctl

Qjackctl allows access to many settings and includes a connection manager, transport controls and a session manager.
  • qjackctl.sourceforge.io
  • local: /usr/share/doc/qjackctl - start through gnome desktop, but it needs to be configured
Configuration on GrayTiger with qjackctl - jackdbus
Jackdbus implements D-Bus endpoint for JACK server, see https://github.com/LADI/jackdbus. D-Bus is an object model that provides IPC mechanism. D-Bus supports autoactivation of objects, thus making it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed. There are many improvements over classical "jackd" approach.

QjackCtl includes a jackdbus frontend.
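With jackdbus the server can also be driven from the command line via jack_control, which ships with jackd2 (a sketch; the driver parameters mirror the jackd command line used earlier):

  jack_control start              # start the JACK server through D-Bus
  jack_control ds alsa            # select the alsa driver
  jack_control dps device hw:1    # driver parameter: soundcard
  jack_control dps rate 44100     # driver parameter: sample rate
  jack_control status             # is the server running?
  jack_control stop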
Configuration on GrayTiger with qjackctl - legacy
First attempt:
  • Error 'unable to connect to server'
  • 16:21:48.096 JACK is starting...
    16:21:48.102 /usr/bin/jackd -dalsa -dhw:0 -r44100
    Cannot connect to server socket err = No such file or directory
    ---> note that hw:0 should be hw:1 on GrayTiger
    
  • Change interface from default to ALC293 - starting now possible.
  • Configuring connections: start qjackctl, setup/settings/misc, 'replace connections with graph button' was activated by default - deactivate this to get the simple connections button used in the beginners guide

Comment: QjackCtl holds its settings and configuration state per user, in a file located at $HOME/.config/rncbc.org/QjackCtl.conf. Normally, there's no need to edit this file, as it is recreated and rewritten every time qjackctl is run.

Second attempt:
  • Before getting started with JACK, be sure to close any audio applications you've been using. xxx continue here with Ted Felix xxx
Usage
Observation:
  • Connection: system capture, system playback, Midi Through, ...
  • Graph: system capture, system playback, Midi Through, ...
  • Patchbay: input and output sockets, each can be Audio, MIDI or ALSA - to avoid making the same connections every time you start JACK, the Patchbay will automatically make the defined connections every time JACK opens.
Observation:
  • In Audacity you can configure JACK as 'host' (but recording then leads to error msg)
  • You can start
  • If you start JACK via QjackCtl, you don't hear any sound anymore, and sound programs refuse to start

Clevo Debbybuster VMPK, FluidSynth and Jack

See Ted Felix. Install vmpk via Synaptics.
  • Kill FluidSynth - To stop fluidsynth, type "quit" at its ">" prompt. Or 'ps -e | grep fluid' and kill the process.
  • Kill Jack - 'killall jackd'
  • Kill pulseaudio - 'echo "suspend 1" | pacmd'
  • Try 'aconnect -x'.

From terminal: 'vmpk &'. Go to Edit > MIDI Connections and set the "MIDI OUT Driver" field to ALSA. Hangs and does not want to change.

Clevo Debbybuster Pipewire

Pipewire is installed but some components such as Pipewire ALSA server are not. If you select them for installation, Synaptics warns that a whole load of regular Gnome stuff will be removed. So probably better to install on a separate box.

Starting pipewire from commandline on GrayTiger results in the following msgs.
[E][000048160.516447][module-protocol-native.c:575 lock_socket()] server 0x563dfed421b0: unable to lock lockfile '/run/user/1000/pipewire-0.lock': Resource temporarily unavailable (maybe another daemon is running)
[E][000048160.516548][impl-module.c:281 pw_context_load_module()] "/usr/lib/x86_64-linux-gnu/pipewire-0.3/libpipewire-module-protocol-native.so": failed to initialize: Resource temporarily unavailable
[E][000048160.516612][main.c:126 load_module()] daemon 0x7ffc2e91d0c0: could not load module "libpipewire-module-protocol-native": Resource temporarily unavailable
[E][000048160.516628][main.c:441 main()] failed to load modules: Resource temporarily unavailable

Samson USB microphone

As this microphone is officially only supported on Microsoft and Mac, there is a connectivity problem on DebianBuster. You can rescan the transport in Audacity, but the application will continuously lose the microphone. Works better on Microsoft.

You need to connect the microphone before starting Audacity (green light on the mike must be on). On Debbybuster the microphone appears twice:
  • Samson G track Pro USB Audio - Headset
  • Samson G track Pro USB Audio - Internal
If you record with the Samson USB in Audacity, playback is not automatically via the laptop's speakers. You can plug a headphone into the microphone to listen to your Audacity recording. Or you must unplug the microphone. There are probably options to set.

After two evenings of experimenting: decide to drop it, DebianBuster drops the USB connection after a couple of seconds or does not even find the hardware.

Audacity

Refer to local Audacity guidance.

Installations

Originally:
  • Linux: in 2022-01, DebbyBuster had Audacity 2.2.2 installed, via Synaptics.
  • Windows: in 2024-03, HP Envy had Audacity 3.4.2 installed, which uses .aup3 file format, incompatible with Audacity version prior to 3.
Linux: https://www.audacityteam.org/download/linux/ downloads an '.AppImage' file. Start it manually as:
  • cd /home/marc/Downloads/Audacity
  • ./audacity....
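Note that an AppImage must be made executable before it will start (a sketch; the exact file name depends on the downloaded version):

  chmod +x ./audacity*.AppImage    # file name pattern is an assumption
  ./audacity*.AppImage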
If you need to remove Audacity 2.* on GrayTiger, use Synaptics.

Using USB turntable

Help via http://wiki.audacityteam.org/wiki/USB_turntables. You need to start and connect the USB Turntable before starting Audacity or it will not be recognized. You can see the USB TT being connected at USB level in /var/log/syslog, with a (rather long) device name. You need to configure the USB TT as an input device for Audacity, via /edit/preference, and select it as "ALSA USB Audio CODEC". MP3 support? Under /edit/preferences/audiofiles you will also see "MP3 exporting plugin not found". This seems to be the file "libmp3lame.so.0". However: downloading with Audacity's download button fails, and finding it under Synaptics fails too. You can find it via http://packages.ubuntu.com/hardy/libs/liblame0 but it is only available for amd64 and i386 architecture. But BlackBetty has an Atom processor. No luck. So either get the source and compile, or try to export in e.g. OggVorbis, and convert that on Angkor. Or use another platform.

AV Linux

Installed on dedicated machine.

Muse Sounds Manager/Muse Score

Muse Sounds Manager

Info at https://musescore.org/en/handbook/4/installing-muse-sounds.

Download into /home/marc/Downloads/MuseSoundsManager. Then 'chmod +x Muse_Sounds_Manager...', then 'sudo apt install Muse_....'. Then you can start via the Gnome Desktop GUI.

You can select Muse Sounds instruments in the GUI, which get downloaded in a local directory

Muse Score Studio

For functionality see MuseScore in LW0311MUSIC.

Muse Score on GrayTiger

Try 1 Installed Muse Score 2 on Debian GrayTiger using Synaptic. Results in a.o.
/usr/bin/mscore
/usr/bin/musescore
/usr/share
/usr/share/applications
/usr/share/applications/mscore.desktop
/usr/share/doc
/usr/share/doc/musescore
Try 2: Where can you find the current Muse Score 4 for Linux? musescore.org only shows MSM for Linux... see https://github.com/musescore/MuseScore/releases. Installed Muse Score 4 on Debian GrayTiger:
  • remove Muse Score 2 using Synaptics, then
  • download from https://github.com/musescore/MuseScore/releases into /home/marc/Downloads/MuseScore_4
Then sudo chmod +x ./MuseScore-Studio-4.3.2.241630832-x86_64.AppImage, then start the AppImage file.

Muse Score 4 on AV Linux

Muse Score 4 is included in AV Linux.

Guitarix

Needs JACK to be started first. Needs a DI box to connect guitar to computer.

Tux-guitar

Needs JACK to be started first.

Impro-Visor

From https://www.cs.hmc.edu/~keller/jazz/improvisor/

Install:
  • cd /Downloads/Impro-Visor
  • chmod +x Impro-Visor_unix_5_16.sh
  • ./Impro-Visor_unix_5_16.sh
  • Outputs in /home/marc/Impro-Visor10.2
Startup: GNU GUI

Rosegarden

Installed on GrayTiger via Synaptics.

Hydrogen

Hydrogen 1.0.1 installed on GrayTiger via Synaptics.

Instruments are referred to as drumkits. System drumkits are read only and go e.g. into /usr/share/hydrogen/data/drumkits, or possibly /usr/local/share/hydrogen/data/drumkits. User drumkits are editable, usually stored in $HOME/.hydrogen/data/.

To see the log messages: open a terminal: 'hydrogen -VDebug'

Obtain more drumkits

Current /usr/share/hydrogen/data/drumkits are what you see through the GUI.

.h2drumkits

Get ".h2drumkits" packages from the internet, such drumkits are packaged and ready to be installed by hydrogen itself, nothing to do with the linux packaging sytem.

Try https://musical-artifacts.com/artifacts?utf8=%E2%9C%93&q=hydrogen

Install drumkits one by one. Do 'Menu/drumkits/new/browse...' Try https://ladspa.org/cmt/plugins.html where plugin 1222 is an organ ?

Ubuntu

Get a deb package from the ubuntu universe repo, called hydrogen-drumkits: http://packages.ubuntu.com/karmic/hydrogen-drumkits. Put it on a flash drive and copy it to your computer. You are not supposed to do anything more than installing it. Right-click on the file and open it with the package installer GDebi. The installer will put the sound files and the drumkit definition (an XML file) into the right location. This location is: /usr/share/hydrogen/data/drumkits

GSequencer - dropped

Installed on GrayTiger via Synaptics. However, it quits constantly - drop. Maybe because it's a KDE thing.

Amarok

On Ubuntu 12.10, Amarok performs great. You can also use it to copy music to iPods. Mount the iPod, in Amarok select the music you want, rightclick and then "copy collection" to iPod. Occasionally hangs but in general it does the job.

Legacy---Seems to have lost MP3 support after upgrade to Lucid Lynx. Installing package "kubuntu-restricted-extras" should do the job. Does not seem logical at first sight since it does not contain any files that make me think about mp3... It seems that "Libxine1-ffmpeg contains MPEG-related plugins used by libxine1, the media player library used by Xine engine, which Amarok and other xine-based players use." Indeed, Amarok uses the Phonon Xine backend. But installing package "kubuntu-restricted-extras" did not solve the problem, Amarok still does not play. Other helpfiles state you need to install "libxine1-ffmpeg". It can be found in Synaptic, but when you install the message is that there are unresolved dependencies that cannot be solved. This includes eg "libavcodec52". You can find this via Synaptics too, but then installing it will remove what looks like a lot of useful other libraries and programs. So what? Synaptics/Settings/Repositories/Ubuntu: here you should select "software selected by copyright (multiverse)".

Rhythmbox

Rhythmbox is a Gnome media player. Info at https://wiki.gnome.org/Apps/Rhythmbox. It uses the Music directory as default location for its music.

Bottom left there is a + sign that creates new playlists.

Configuration seems to be done automatically, I can't find any preferences or similar.
  • file:///home/marc/.local/share/rhythmbox/ e.g. file rhythmdb.xml
To remove all customisation, remove these folders:
  • ~/.local/share/rhythmbox
  • ~/.cache/rhythmbox - this is where e.g. album art is cached
  • ~/.gconf/apps/rhythmbox
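In one go (a sketch; this removes all Rhythmbox settings, library metadata and cached album art for the current user):

  rm -rf ~/.local/share/rhythmbox ~/.cache/rhythmbox ~/.gconf/apps/rhythmbox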

iPod

iPod Classic/GTKPOD

You need to format the Classic under Windows. Then you can plug it in. Use Rhythmbox to transfer the whole music library via the "synchronise" function. Mount the iPod.

Select it in the Rhythmbox application and you will see buttons appear for "sync". You can also ask for the properties of the iPod.

Legacy: Under Ubuntu 12.04, use Amarok to transfer music. Ubuntu 10.04 LTS (Lucid Lynx): use the gtkpod iPod manager to create "repositories", one for the Linux based file system with the mp3's, another one for the iPod. Import files from the filesystem to "MusicLibrary". Select files and right-click to create a playlist (NO smart playlist). You can then transfer playlists from the filesystem repository "MusicLibrary" to the iPod repository. However, you can only transfer mp3's; formats such as wma are not transferred. Unmount the iPod with the file manager (Dolphin), or with "umount /dev/sdh1".

File conversion towards mp3

Should work with Audacity or ffmpeg, but in some cases this fails due to "wma proprietary" format.

Legacy

Legacy: Lucid Lynx: physically connect iPod via USB, and then issue "mount" and you get:
  • kernel log: "write access to a journaled filesystem is not supported, use the force option at your own risk, mounting readonly."
  • mount output: "/dev/sdi2 on /media/SiLVER type hfsplus (rw,nosuid,nodev,uhelper=hal)"

Start gtkpod in terminal window. Gtkpod can read the iPod's music but cannot write, since it's mounted read-only. According to various sources mounting in rw is only possible if your iPod is formatted in FAT32 (which mine is not).

Networking

NET-3

Intro

According to the NET-2/3-HOWTO, since kernel 1.1.5 you have NET-3. Programs like ifconfig, route and netstat are called the NET-3 'utility suite'. Programs like telnet(d) etc. are called the 'network applications'.

Basic choices to be made

IP address : is defined per interface (ifconfig command) :
  • 127.0.0.1 for the loopback interface
  • 10.0.0.1 if Kassandra acts as a ppp server via nullmodem, this Class A address automatically uses 255.0.0.0 as netmask. And the IP address of your host is also stored in /etc/hosts.
Network address : is the AND of your IP address and your netmask : 10.0.0.1 AND 255.0.0.0 = 10.0.0.0
Broadcast address : is the network address OR the inverted netmask (cfr NET-2/3 HOWTO if you need this)

Router (gateway) : not necessary for loopback of PPP usage (but for PPP you may have to issue a "route add default gw 1.2.3.4" command).

Nameserver address : use the ISP's (or run named yourself)

rc files : to automate your configuration commands. Linux supports both BSD and SYS-V style rc commands.

/etc/rc.d/init.d/network : initial script that verifies the existence of /etc/sysconfig/network, which contains definitions like e.g. HOSTNAME=Kassandra. If it finds it, it cd's to /etc/sysconfig/network-scripts, where the configuration scripts (e.g. ifup-routes etc...) reside.
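Putting those choices together by hand, in the legacy ifconfig/route style used here (a sketch; interface name and addresses are examples):

  ifconfig lo 127.0.0.1 up                          # loopback interface
  ifconfig eth0 10.0.0.1 netmask 255.0.0.0 up       # example Class A address
  route add default gw 10.0.0.254                   # example gateway
  echo "nameserver 194.7.1.4" > /etc/resolv.conf    # the ISP's nameserver (cfr DNS below)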
 

Traceroute - tracepath

The old traceroute may still be available but there is also the newer "tracepath". And there is also "lft" layer four trace. Great.

DNS - nslookup - ksoa

DNS on Kassandra (RH 5.0):

In /etc/host.conf I have 'order hosts, bind' and 'multi on'. This means: first check the hosts file (/etc/hosts), then use the nameservers (aka bind). Multi means that you accept multiple resolutions. This looks OK. However, there is no /etc/resolv.conf. Well, the 'resolv.conf' file gets automatically created via the control panel.
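For reference, a minimal pair of resolver files could look like this (a sketch; the nameserver address is the INnet one used further below):

  # /etc/host.conf
  order hosts,bind
  multi on

  # /etc/resolv.conf
  nameserver 194.7.1.4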

On Toothbrush (SuSE 5.3):

Use e.g. INnet's DNS on "194.7.1.4". Configure this via YaST, System Admin/Network config. This results in an "/etc/resolv.conf" file with the remark "don't edit, created via SuSE configuration editor". Not bad.

On boy (SuSE 6.0):

Similar to SuSE 5.3, use YaST to rely on INnet's DNS on "194.7.1.4". Careful when using DHCP, this simply overwrites your "/etc/resolv.conf"

nslookup - ksoa

Don't forget nslookup gives you plenty of info. And KDE comes with KSOA.

DNS on malekh

Apparently the INnet DNS server (194.7.1.4) went down at a certain point in time, so try:
  • www.dns.be (under 'domain index' you find name servers for all ISP's registered in Belgium)
  • auth00.ns.be.uu.net - 194.7.1.9
  • auth50.ns.be.uu.net - 194.7.15.66

Trying this yields no successful name resolution, maybe these are internal name servers? Tried again later with the INnet DNS server, ok again.

Serial communication/nullmodem

Com ports

The serial ports COM1..COM4 have specific names under Linux, depending whether you use them for input or output:
  • COM1 out = /dev/cua0
  • COM1 in = /dev/ttyS0
  • COM2 out = /dev/cua1 ... (cfr Linux Serial-HOWTO)
So outgoing Netscape traffic will talk to /dev/cua0, and incoming nullmodem traffic will be listened to via /dev/ttyS0 or ttyS2 (pcmcia card)

Note the subtle difference with VCs (Virtual Consoles), which are called tty1 etc, WITHOUT the 'S' (tty1 versus ttyS0).

Remember 'setserial' sets up the serial ports at boot time. Try 'statserial' to find out the status of your 'pins'.

Incoming traffic : getty -mingetty-

For incoming communications, a getty program watches the port. This getty is started via INIT, with the definitions found in /etc/inittab. There you'll find lines stating : '1:12345:respawn:/sbin/mingetty tty1'.

I note a small inconsistency here: do ttyS0 and tty1 match? Or not? CAREFUL: ttyS0 is COM1, a serial port, tty1 is the first Virtual Console. So there is no inconsistency at all.

Also, 'man mingetty' informs me that this is a 'minimal getty' which does not support serial lines. So I first have to change the listening getty program. 'mingetty' suggests 'mgetty', but there's no manpage for that. - getty_ps & uugetty -

So let's look in the Serial-HOWTO, '/usr/doc/HOWTO/Serial-HOWTO.gz'. This explains how to set up getty_ps and uugetty, but not where to get them from.

So let's look into the Red Hat package manager etc. How do we install getty_ps and uugetty? Well, do this via glint. Now how do we get getty_ps to listen to an incoming serial port? Right now, /etc/inittab contains a definition like '1:12345:respawn:/sbin/mingetty tty1'. However, this only deals with the Virtual Consoles, hence the reference to tty1 rather than /dev/ttyS1. So add a line to /etc/inittab that makes an executable from the getty_ps package watch over ttyS0.

Question: what is the name of the loadmodule of 'getty_ps'?
Answer: Glint tells me that getty_ps is a package under 'utilities/system', the executables are /sbin/getty (for consoles) or uugetty (for modems). So I've added a line to let uugetty watch over ttyS0, the incoming COM1 port.

In order to be able to let root login, I also added ttyS0 in /etc/securetty.
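The inittab line I mean has this general shape (a sketch only; the exact uugetty arguments, speed label and defaults file differ per setup, check 'man uugetty'):

  # /etc/inittab - let uugetty watch the incoming serial port ttyS0 (COM1)
  S0:2345:respawn:/sbin/uugetty ttyS0 38400 vt100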

Question: how does setserial initialize my serials at boot time?
Answer: ...
 

securetty

The file /etc/securetty can be used to restrict the login of root to a particular tty port. Refer to 'man securetty' and 'man login' for interesting details.
 

Minicom

Basics Documentation:

Can be found in /usr/doc/minicom - man minicom - minicom -h. The executable is typically /usr/bin/minicom.

Configuration goes in e.g. /var/lib/minicom/minicom.users and minirc.dfl (defaults). On SuSE, I also noticed an "/etc/minicom.users". Check out the contents of the package via glint or rpm if in doubt. Minicom can talk to the modem via:
  • /dev/modem (if the link has been set)
  • /dev/cua0 or cua1 (on Borsalino /dev/cua1 was the external serial interface)
  • ---> apparently since kernel 2.2, cua is no longer used, it became /dev/ttyS...
  • /dev/ttyS0 (on Avina this is the external serial interface)
  • /dev/ttyS2 (on Aviana this is the pcmcia card modem)
This is defined in the Minicom-configuration.

Minicom configuration

Minicom can be configured in at least two ways:
  1. by running it with the -s switch: minicom -s
  2. once within minicom, use Alt-O or Cntl-A O (cOnfigure?)
You typically create an entry for your ISP via Cntl-A D.
Remember help is provided via Cntl-A Z, quitting is via Cntl-A Q.
 

Minicom trouble shooting

Make very sure dhcp-client is stopped (/sbin/init.d/dhclient stop - or /etc/init.d/...). If you get the message '/dev/modem is locked', you can at least try 2 solutions:
  1. identify the locking PID in the logfile and kill it: one way is to peek inside "/var/lock/LCK..cua1"; here you'll find a PID. Kill it with e.g. "kill -n 9 PID" (see the one-liner after this list).
  2. reboot the machine.
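The first option as a one-liner (assuming the lock file contains nothing but the PID, which is the usual UUCP-style convention):
kill -9 $(cat /var/lock/LCK..cua1)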
Minicom via serial interface apparently won't run together with pcmcia services. So stop these, e.g. via the Sys V init editor.

After using pcmcia & dhcp, DNS seems to be screwed up as well. You need to manually adjust "/etc/resolv.conf" again. That's why I created a "resolv.conf.original". And finally, if you want to surf, remember that Netscape might have been configured to go via a proxy (edit preferences - direct connection).

ppp

PPP basics

Ultimately, pppd lives as /usr/sbin/pppd. Options go in /etc/ppp. Then:
  1. Ensure dhcp-client has been stopped, e.g. /sbin/init.d/dhclient stop - or /etc/init.d/...
  2. Via minicom, dial out
  3. Logon to your ISP machine using your uid/psw
  4. [Optionally, you may need to start the pppd server on the ISP side (but this is rare)]
  5. Quit minicom without resetting the modem (Cntl-A Q or Alt-Q)
  6. Start pppd as a client, e.g.:
    1. cd /usr/lib/ppp
    2. pppd -d -detach /dev/____ & (a condensed example of steps 6-8 follows this list)
  7. Optionally, you may need to define the ppp link as the default outgoing route. Do this in three steps:
    1. ifconfig will show you the other side of the ppp link, e.g. P-t-P: 193.74.1.238
    2. now do: route add default gw 193.....
    3. ping, e.g. your name server (cfr /etc/resolv.conf)
  8. check again with ifconfig, netstat, pppstats
  9. start your browser
  10. terminate with ppp-off.
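Steps 6-8 condensed into example commands; device, speed and addresses are illustrative only:
pppd -d -detach /dev/ttyS0 38400 &
ifconfig ppp0 (note the P-t-P address, e.g. 193.74.1.238)
route add default gw 193.74.1.238
ping 194.7.1.4 (your name server, cfr /etc/resolv.conf)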

PPP : info via control panel/package manager

Basic directories include : /etc/ppp (options) /usr/doc/ppp-2.2.0f-2 (readme's, scripts directory with lots of ppp scripts , ...)

===> README.linux is helpful, as well as : /usr/sbin/pppd /usr/sbin/pppstats
 

PPP server via nullmodem

No change required to /etc/inittab, you don't need a getty to watch over the port.

Starting the ppp daemons:
=> Server (Kassandra): pppd -d -detach crtscts lock 10.0.0.1:10.0.0.2 /dev/ttyS0 38400 &
=> Client (Bugis): pppd -d -detach crtscts lock 10.0.0.2:10.0.0.1 /dev/cua0 38400 &

Verify via 'ifconfig' command.
 

PPP on malekh

You have to:
  • Point minicom to the serial device /dev/ttyS0 (serial port) or ttyS2 (typically PCMCIA)....!
  • Reset /etc/resolv.conf (e.g. from /etc/resolv.conf.original) - careful with SuSE, which starts from /etc/rc.config for name server configuration etc
  • If running, stop your dhcp client '/sbin/init.d/dhclient stop' - or /etc/init.d/... (pppd will allocate the IP address)
  • Launch ppp with "pppd -d -detach /dev/ttyS0 &" or ttyS2...!
  • Use "ifconfig" and "route add default gw ..."
  • If you want to surf, let Netscape use a direct link, no proxies (Edit/preferences/advanced/proxy)

tcpdump

tcpdump introduction

Remember the basic structure of the protocol stack:

Appl: _________________________| appl hdr / data |_______

TCP: _________________| TCP hdr | appl hdr / data |_______

IP:_____________| IP hdr | TCP hdr | appl hdr / data |_______

Eth:_____| Eth hdr | IP hdr | TCP hdr | appl hdr / data | Ethtrl |_______

Also, consider:
  • IP hdr includes an 8-bit 'protocol' field, where 1=ICMP, 2=IGMP, 6=TCP, 17=UDP.
  • TCP/UDP hdr include the port numbers, with well-known ports defined in /etc/services.

Basic documentation can be found in /usr/doc/packages/tcpdump.

Tcpdump operates by putting the NIC in promiscuous mode (which must be allowed by the OS). Note that alternatives to tcpdump include Solaris' snoop and Aix iptrace.

Tcpdump relies on the kernel to capture and filter the packets for it. BSD-derived kernels provide BPF (BSD Packet Filter), Sun provides the NIT (Network Interface Tap). Linux provides LSF (Linux Socket Filtering), derived from the BPF. Check this out on /usr/src/linux... /Documentation/Networking/filter.txt

Filtering: BPF is instructed by the tcpdump process to put the interface into promiscuous mode, and to pass all packets to tcpdump or to filter some out. The filter is specified on the command line. By default, all packets should be captured. If the network outruns the box, packets are 'dropped'.

Timeout: since the data rate of the network can easily outrun the processing power of the CPU, and since it's costly for a user process to read from the kernel, BPF packs multiple frames into a single read buffer and returns only when the buffer is full, OR after a user-specified time-out (default 1 s).

tcpdump on malekh

Basic fact-finding: try running 'tcpdump -i eth0'.

According to the man page, tcpdump should by default capture all traffic. But how do we get it visualised? Flags include (a few example invocations follow the list)
  • -v and -vv for very verbose
  • -e print link-level header on each line
  • -s snarf snaplen bytes of data rather than the default 68 bytes
  • -a convert addresses into names - seems to generate nice output for SMB
  • expression: here you can specify e.g. TYPE: 'host foo', 'net 10.54', 'port 23', or DIRection: 'src foo', 'dst net 10.54', or PROTO: 'tcp port 21'
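A few example invocations combining the flags and expressions above (interface, addresses and file names are placeholders):
tcpdump -i eth0 -vv host 10.54.18.216 and port 23
tcpdump -i eth0 -e -s 1500 tcp port 21
tcpdump -i eth0 -l > /root/tcpdumpdata1 & tail -f /root/tcpdumpdata1
The last form (line-buffered output redirected to a file, watched with tail) is the one used in Answer 1.2 below.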


Question 1: where do we see/save the output?
Answer 1.1: use 'tcpdump' and the output goes to your screen.
Answer 1.2: use 'tcpdump -l > /root/tcpdumpdata1 & tail -f /root/tcpdumpdata1'. The output goes to the file.
Question 2: what do we see?
Answer 2.1: Output is 'raw'. First the name of the itf, then a timestamp. Next sending host, then destination host.
Answer 2.2: I ran some tests and dumped them into /root/tcpdump123. Tcpdump's manpage states it was created to dump HEADERS of packets. Default length is 68 bytes; this can be changed with -s. Also, remember, it's called 'tcpdump', so we should be watching at the level of tcp (however...). How do we interpret this?

Interesting add-on: tcpslice (check out man tcpslice).

Also: check out ITA: www.acm.org/sigcomm/ITA - the Internet Traffic Archive.

If you run 'ifconfig', you'll see the IP address of your eth0, and the PROMISC flag.

iptraf

A basic traffic monitor, monitors load, indicates types of traffic, etc. Apparently no real sniffer capability. Check out /usr/doc/packages/iptraf.

cmu-snmp

Great tools from Carnegie Mellon University. Includes snmpget/set/trap, and also snmpwalk... Installed by default via the package manager. Check out /usr/bin/snmp* for various commands.

Firewalls

dhcp - proxy servers - Brussels/KL - c4.net

General information on dhcp

The DHCP protocol is defined in RFC 2131 (obsoletes 1541). For Linux:
  • the dhcp server is "dhcpd", reading configuration information from "/etc/dhcpd.conf" and keeping track of leases in "dhcp.leases". Address pools are allocated per subnet.
  • the dhcp client is "dhclient", reading configuration info from "/etc/dhclient.conf".

No "howto". No "man dhcp"- however, there's a "man dhcpd". No info in /howto/Net3 manual. However, found a "mini-howto" (at the end of the "howto" directory => mini). Covers both client & server set-up,however seems outdated. Rather:

Client:
  • /etc/init.d/dhclient
  • /etc/dhclient.conf
  • man dhclient
  • man dhclient.conf

Quite easy to use Yast2 for configuration.
Server:
  • dhcpd (the server himself)
  • /etc/dhcpd.conf (config file)
  • man dhcpd
  • man dhcpd.conf
  • man dhcpd.leases

malekh - dhclient at PwC Brussels/KL

Yast: System Administration/Network/DHCP client. First install dhclient (series "n"). Then use Yast to activate it.

On start-up, dhclient reads "/etc/dhclient.conf". This:
  • contains information on what is expected from the dhcp server (subnet mask, broadcast address, dns, ...)
  • points to "/sbin/dhclient-script" where various "ifconfig" and "route add" commands are executed.

Note that also "/etc/resolv.conf" is typically overwritten, since you receive a dns server.

Within PwC Brussels, a W95 client tells me that:
  • the W95 box' own IP address (obtained via dhcp) is e.g. 10.54.18.216
  • the corresponding subnetmask is 255.255.252.0
  • the default gw is 10.54.16.2
  • dns and dhcp server are combined on 10.54.20.40
  • the outgoing proxy is found at 10.54.14.10 (used to be 10.54.20.04)

I've safeguarded working (at least @ PwC Brussels) versions of /etc/dhclient.conf and dhcpd.conf in *.original files. In the Kuala Lumpur office, use the dhcp and the Sydney gateway (10.140.10.2) to surf out. Within DigiCert, use their internal www.digicert.com.my (port 8080) to surf out.

malekh - dhclient & server -v'portable'

Server

Configuration comes from '/etc/dhcpd.conf'. This contains essentially two types of statements:
  • parameters, e.g. how long lasts a lease, provide addresses to unknown clients, make suggestions with regard to default gw's, ...

  • declarations, e.g. describing the topology, the clients, the addresses they can use, ... This includes shared and subnet declarations.

Some core decisions for c4.net, taking into account that an IPv4 address is 32 bits long, composed of a network number and a host number. Let's select a class B network address. This means: leading bits '10', then 14 network bits and 16 host bits, which makes 32 bits altogether. Class B ranges from 128.* to 191.* .

According to the rules for private networks (RFC1918), for class B, we can select between '172.16.0.0' and '172.31.255.255'. The standard subnetmask for class B is '255.255.0.0'.

So let it be: network '172.16.0.0', addresses ranging '172.16.0.10..20', with a subnetmask of '255.255.0.0'. Save this in '/etc/dhcpd.conf'.
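A minimal sketch of the corresponding /etc/dhcpd.conf stanza (ISC dhcpd syntax; the lease times and the routers option are assumptions, adjust to taste):
subnet 172.16.0.0 netmask 255.255.0.0 {
  range 172.16.0.10 172.16.0.20;
  option subnet-mask 255.255.0.0;
  option routers 172.16.0.1;
  default-lease-time 86400;
  max-lease-time 604800;
}
Run the server against it and watch /var/log/messages (and dhcpd.leases) for the leases being handed out.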

Two alternatives to start dhcpd:

  1. by updating '/etc/rc.config' (start dhcpd) and running '/sbin/SuSEconfig', or

  2. by '/sbin/init.d/dhcpd start'.

The second alternative is preferred. HOWEVER this runs into problems. The dhcpd parameters conflict with what's already defined in /etc/rc.config as IP address. SOLUTION:

  1. Manually stop your dhcp client '/sbin/init.d/dhclient stop';

  2. Manually 'ifconfig eth0 172.16.0.1'

Starting & stopping the dhcp server:

  • Starting up the dhcp server: '/sbin/init.d/dhcp start'.

  • Shutting down the dhcp server: '/sbin/init.d/dhcp stop'.

Starting & stopping the dhcp client:

  • Starting up the dhcp client: '/sbin/init.d/dhclient start'.

  • Shutting down the dhcp client: '/sbin/init.d/dhclient stop'.

avina - dhclient

DHCP client is by default not installed, instead the DHCP server was automatically installed. Used YaST to remove the server and install the client. Then use Yast1 to configure and activate it.

Overview of proxy servers

  • PwC Belgium: proxy-be, or 10.54.20.4 (remember to enable SSL 40-bit ! )

  • PwC UK: 10.44.240.41:80

  • PwC Australia: 10.140.10.2

diald

Configuration via "/etc/diald.conf".
 

PCMCIA support

Intro - PCMCIA on boy - SuSE 6.0

Howto in "/usr/doc/howto/en/PCMCIA-HOW.gz". SuSE uses a Sys V init editor's "initscript". However, I don't find a script to start pcmcia. Script should be "/sbin/init.d/pcmcia". I don't have the script, I assume pcmcia is not installed.

PCMCIA installation on boy - SuSE 6.0

OK, pcmcia is a package of the "a" series, manually installed through YaST now. Card services is essentially a set of loadable modules. Use Sys V init editor. Remember: use "lsmod" to see what's loaded, however this reports no pcmcia is loaded. This seems to be a common problem according to the pcmcia howto. Some fact-finding on boy:
  • SuSE 6.0 manual p. 339 explains that "/etc/rc.config" defines whether the pcmcia subsystem is launched at boot time (PCMCIA=i82365 or tcic - whatever that means).
  • there exists /etc/pcmcia, which contains e.g. sample config files from David Hinds.
  • Howto: "/usr/doc/howto/en/PCMCIA-HOW.gz"
  • Package documentation: "/usr/doc/packages/pcmcia/..." - here file SUPPORTED.CARDS explicitly states that my 3COM 3C589C is supported. In fact, it's on the top of the list. The Howto in this directory states that virtually all cards are i82365.
  • SuSE's pcmcia start-up config is kept in "/etc/rc.config", with "/sbin/init.d/pcmcia" as the start-up script.
  • Programs include: cardmgr, cardinfo
  • Helpful: lsmod
So I included a "PCMCIA=i82365" statement in "/etc/rc.config". And I invoked "/sbin/SuSEconfig". Reboot, works OK. However, seems to be incompatible with running Minicom.

PCMCIA on malekh SuSE 6.1

QUESTION: How to install PCMCIA services?

Remember that pcmcia is a package of the 'a' series. Some fact finding:

  • No variable found in /etc/rc.config (should be PCMCIA="i82365").

  • No references to pcmcia in the system log /var/log/messages .

  • Neither in /var/log/boot.msg .

  • No beep at boot time.

  • No reference to pcmcia in /var/adm/inst-log/installation-990828 .

Conclusion: no pcmcia package installed.

ANSWER

Tried YaST, but it is not really elegant for installing a single package. Used kpackage instead. After the installation of the pcmcia package, I ran SuSEconfig. As a result, /etc/rc.config got updated and now includes the PCMCIA=i82365 statement. Also, lsmod shows that the pcmcia_core and i82365 modules are loaded. Cardinfo works fine now.

Using the 3COM 3C589C card as eth0 works fine on the PwC Brussels LAN.
  • Insertion results in two high beeps, in rapid succession.
  • The log shows that an "insmod /...../3C589_cs.o" happens.
  • After that, the script "network start" is executed.
  • The cardinfo utility reports a 3C589 card.
Using the Xircom CEM-56-100 as modem does not go that smoothly.
  • Insertion results in a first beep, followed by another one several seconds later. So there's a difference there...
  • The log shows an "insmod /..../xirc2ps_cs.o" happens.
  • Then "insmod /..../serial_cs.o" happens as well.
  • After that, script "network start eth0" is executed. Options are read from /etc/pcmcia/network.opts.
  • And script "serial start ttyS3". Options are read from /etc/pcmcia/serial.opts.
  • However, there is no feedback to be found after the execution of the scripts.
  • File /var/run/stab contains an entry for socket 1, stating the card is ttyS3, major node 4, minor node 67
  • Cardinfo reports a Xircom CEM56, both as eth0 and as serial device ttyS3. So the type is not entirely correct (CEM56 versus CEM56-100).
  • Also, "file /dev/modem" reveals it is a link to /dev/cua3. That looks OK.
  • Still, Minicom (or Seyon) does not seem to find it.
  • The PCMCIA HOWTO states that a single beep means the card was identified successfully. The second beep would mean the configuration went OK. This second beep takes quite a while on malekh. The HOWTO suggests to run "sh -x /etc/pcmcia/serial start ttyS3". Which runs fine, ending in linking cua3 to modem. So far so good.
  • Still, Minicom doesn't find it.
  • ...maybe the problem's due to me using a CEM56-100, rather than a simple CEM56.
Further info:
  • Running "cardctl scheme" informs me that I have the default scheme.
  • File /var/run/pcmcia-scheme is empty...STRANGE...
  • Running "setserial /dev/modem" shows I have a UART 16550A, port ..., IRQ 5.

Using the WISEcom at MBS: cardinfo registers this as ttyS2, pointing minicom to /dev/ttyS2 gets me a reply of ATZ / OK - ATDT / NO CARRIER.
 

PCMCIA on avina SuSE 6.4

By default, PCMCIA does not work, cardmgr reports "no pcmcia driver in /proc/devices". PCMCIA How-To: your base kernel modules do not load. The SuSE website indicates this is a bug.

Downloaded new pcmcia.rpm into /Avina/pcmcia.rpm, performed rpm -U /Avina/pcmcia.rpm . Now seems to discover the Toshiba chipset... Then download and install pcmcia_m.rpm . Reboot. Cardinfo now works and recognizes the 3COM Ethernet card. Then install dhcp client, and configure eth0 with dhcp addressing.

PCMCIA on imagine SuSE 7.2

Package needs to be installed. Configuration via Yast1 or Yast2 does not work (at least not easily). Use 'cardctl status' to see if the card is found. Manually adjust /etc/rc.config by setting 'NETCONFIG_PCMCIA="_0"' (i.e. the first device), then run SuSEconfig. Hey, apparently Yast2 decided (at an unknown point in time) to remove the dhclient software and to install dhcpd instead. This had to be manually adjusted again via Yast2. Also, the NETCONFIG_PCMCIA="_0" adjustment in /etc/rc.config seems to vanish occasionally and has to be remade. Occasionally some other dhcp settings vanish as well. Apparently Yast2 is not so good at redefining them - Yast1 seems to do a better job.

xnetload - ntop

Try 'xnetload ppp0'. Try 'ntop'.

SAMBA - smbclient

Samba is a LanManager-like file and print server for Unix, implementing SMB. Try "man samba". Key components include:
  • smbd, the server daemon, configured via smb.conf, handles file & print services
  • nmbd, the netbios name services daemon, can also be used interactively to query various name servers
  • /etc/smb.conf (configuration file, mainly oriented towards the server-side)

  • smbclient, a client that allows you to access SMB shares e.g. in a WfWG environment - try "man smbclient"...
  • various other utilities such as smbprint, smbtar (dumping smb shares directly into tar), ...
  • testparm (a test utility)
  • smbstatus

So it must be possible to:
  1. use smbclient to work on a windows share (e.g. Win2000-Kassandros)
  2. use smbd to let a windows client access an avina 'share'

HISTORY PART 1/2: Using smbclient. Playing ...
  • "smbstatus": not very helpful

  • "smbclient -L kassandros" : does not get you very far due to security

  • "smbclient -L kassandros -U administrator" : if then you provide the right password, you get a list of shares back

  • "smbclient //kassandros/tux -U administrator" : provide the psw, and you have access to /tux

  • use the -l flag e.g. "smbclient ... -l logfilename" to enable logging

  • use the -d flag e.g. "smbclient ... -d 3" to see debug messages in the log files

    • use "?" to list the possible commands now

  • "mkdir / mget (kassandros -> tux) / mput (tux -> kassandros) / ls ..."

  • "recurse" to turn recursion on/off for directory operations

  • "mask" to define exactly what is mget/mput when recurse is "on" (?)

  • "lcd" to position your local directory

  • "exit" to quit

Uploading files to kassandros (win2000):
  • Before using "smbclient", the Linux box needs to be able to resolve kassandros into 10.0.0.5 . A simple way is by editing "/etc/hosts" on Linux, and entering there the IP address that kassandros got from the dhcp server (on W2000, use "ipconfig" to learn kassandros' IP address)
  • Connect: "smbclient //kassandros/tux -U administrator" : provide the psw, and you have access to /tux - use "?" to list the possible commands ...
  • General preparation:
    • "recurse" (to turn recursion on - works alright on the local side, but does not recursively create subdirs on the server side ...)
    • "prompt" (to turn prompting off)
  • Adjust the local path:
    • "lcd /Java01Net" ("lcd" works on the local side)
    • "lcd" (acts as a local "pwd")
  • Adjust the remote path:
    • "pwd" ("pwd" works on the remote server)
    • "mkdir /Java999"
    • "cd /Java999" ("cd" works on the remote server)
  • Do the transfer:
    • "mput *.*"
    • "dir" ("dir" works on the remote server)
    • "du" ("du" works on the remote server)

OK BASICS WORK BUT RECURSION ON SERVER SIDE DOES NOT. CAN ONLY UPLOAD WITHIN 1 LEVEL OF THE DIRECTORY, OR MUST MANUALLY BUILD THE ENTIRE TREE. Try the C$ share: "smbclient //kassandros/C$ -U administrator": does not work either.

HISTORY PART 2/2: Using smbd. Objective: establish the Linux box as a Samba server, offering shares to Win2000. Major problem: Win2000 only connects if the server it is talking to supports encrypted passwords. Therefore: create initial smbpasswd entries via "cat /etc/passwd | /usr/lib/samba/scripts/mksmbpasswd.sh > /etc/smbpasswd". Encrypted passwords obviously go in "/etc/smbpasswd". As root, you can execute "smbpasswd -d marcsel" and "smbpasswd -a marcsel" to reset the password on this smb userid (password set to "samba"). The remaining steps (a minimal smb.conf sketch follows this list):
  • create appropriate /etc/smb.conf
  • then execute "testparm"
  • then update /etc/rc.config to start samba, and run SuSEconfig
  • reboot to start smbd and nmbd
  • * alternatively: "/etc/rc.d/init.d/smb stop | start"
  • you get details in "/var/log/smb.log"
  • smbstatus
  • smbclient '\\TUX\HOMES' -U root ("homes" is the name of the share in /etc/smb.conf)
  • smbclient '\\TUX\AAATEST' -U marcsel (must be capital letters for the service, and you must use marcsel which was enabled via smbpasswd)
  • - and the access rights must allow marcsel to read/write AAATEST and subdirs ...
  • used "chmod -R o+r /Kassandra_Data/*" - this seemed to do the trick

sniffit

Downloaded sniffit, basic and patch file. Safeguarded into /Kassandra_Data/AdditionalRPM. Moved to '/' and unpacked. Also untarred the patch tar file, and moved the patch to the source dir. Then ran 'patch ...'. Running 'configure' for a second time, the msg looked OK; what's in this 'config.status' file? Looks OK, also in 'configure.log'. Running 'make' for a second time: 'sniffit is up to date'. Thank you. But where is it??? OK, in /sniffit.0.3.5 there is an executable 'sniffit'. However, it comes back with 'cannot execute binary file'. So? Alternatively, reviewing the index file of SuSE61, sniffit seems to be distributed on CD3. Let's have a look. Unfortunately, it does not seem to be there. Back to the Internet. Mailed the author. HOWEVER: try '/sniffit.0.3.5/sniffit'. This works, but does not recognize the device, even if I try '-F eth0'.

Networking source code

Check out /usr/src/linux..., particularly the make files and:

  • tcp.c: "/usr/src/linux.../net/ipv4/tcp.c": the implementation of tcp

  • ip: "/usr/src/linux.../net/ipv4/ip_input.c": the implementation of ip

Documentation can e.g. be found in "/usr/src/linux.../Documentation/networking/tcp.txt"

ISDN

Native ISDN connection

Here we deal with native (direct) ISDN connections, e.g. straight onto the S-bus. Check out:
  • /usr/doc/howto/en/NET-3 - contains a section on ISDN, and a pointer to a faq
  • /usr/src/linux.../documentation/ISDN
  • /usr/doc/packages/i4l
  • /usr/doc/packages/i4ldoc: FAQ:eng-i4l-faq.html
  • /usr/doc/packages/i4ldoc/howto - tutorial - ...
Command: isdnctrl.

ISDN via 3COM 3C891 ISDN LAN modem

Typically:
  • reset the PCMCIA card (pull it out, plug it in)
  • physically connect malekh to the ISDN LAN modem
  • start your dhcp client ('/sbin/init.d/dhclient start')
  • use 'ifconfig' to see whether you did obtain an IP address for eth0
  • you can try to ping 192.168.1.1
  • point your browser to '192.168.1.1/mainpage' / alternatively, you can also telnet
  • psw for 3COM could be: qdge0416

Fax

Check out /usr/doc/packages/hylafax. Here's a README.SuSE, providing installation instructions (start via faxsetup, which configures items such as your modem). There's also an html section, with lots of info. Apparently hylafax is the server, susefax is a client. Starting the client results in a nice GUI, but no server to talk to.

NFS

NFS basics

NFS components:

  • portmapper (rpc.portmap or rpcbind) to map ports to rpc programs

  • rpcinfo, use "rpcinfo -p", or for the remote side "rpcinfo -p 10.0.0.3".

  • server-side: /etc/exports

  • server-side: nfsd (sometimes called rpc.nfsd) and mountd (sometimes called rpc.mountd)

  • client-side: mount/umount (you can use "-v" for verbose output, and -o timeo=n to increase the timeout value)

Of course, there is an NFS-HOWTO. Using "tar" seems to be the fastest way to pass files over. Debugging: on the server side, running "rpcinfo -p" should show at least portmapper, mountd and nfsd running. You can also check /var/log/messages for daemon output. Easy way: make a tar file, export it to the client. On the client, move the tar file into "/" and untar it there.

History - NFS with tintin (compaq Philippe Dhainaut)

SAVING to tintin

Connect both tintin (NFS server) and malekh (client issuing 'mount') to a hub, then:

  1. on tintin: mkdir ttmalekh, and include a line "/ttmalekh (rw)" in /etc/exports

  2. on tintin: restart the nfs server

  3. on malekh: mkdir tintin

  4. on malekh: mount 192.168.1.3:/ttmalekh /tintin (use "-v" for verbose output, and "-o timeo=n" with default n starting at 7 increased to e.g. 21)

You can also use KDE to copy files (but it's slow).

RESTORING from tintin

Mounting on tintin from avina fails with the msg: RPC timed out. Try the other way round: On avina:

  • mkdir tintin

  • make sure the nfs server is started (in /etc/rc.config)

  • make sure outsiders can write to /tintin; here various alternatives exist, probably best is to handle it at individual user level & via PAM, however, for the time being: simply used MC to allow "others" to write

On tintin:

  • mount 192.168.1.253:/tintin /avina
  • cp -r Cryptix3 /avina (where the -r stands for recursively)
  • cp -r ...

NFS with tux (HP 4150)

SAVING to tux Connect both tux (NFS server) and avina (client issuing 'mount') to a hub, then:
  1. on tux: mkdir TuxAvina, and include a line "/TuxAvina (rw)" in /etc/exports
  2. on tux: restart the nfs server ("/sbin/init.d/nfsserver stop" "start")
  3. on avina: mkdir tux
  4. on avina: mount 10.0.0.5:/ttmalekh /tintin
    • => results in "mount: RPC: program not registered"
    • solution: NFS server was not started in /etc/rc.config on tux, hence update and run /sbin/SuSEconfig
    • => results in "RPC: timed out"
    • solution: simply retry (sometimes a reboot of the server machine seems to be necessary), then issue "mount"
Then you can use e.g. the cp command to copy files.

Wireless

In general: use iwconfig, iwlist, iwspy, iwevent, iwpriv, wireless.... You may have to do e.g. "sudo iwlist scanning". On BlackBetty (Dell mini): connect to <-?-> by entering the password. When editing the connection you get request: 'nm-connection-editor' wants to access the password for 'Network secret for Auto <-?-> / 802-11-wireless-security/psk' in the default keyring.

NetworkManager Kubuntu 12.10

On Angkor2, Kubuntu 12.10 comes with "NetworkManager". Good intro in Wikipedia. Documentation seems hard to find on the running Angkor2, but there is https://live.gnome.org/NetworkManager.

Cloud and virtualisation

AWS

Installation

Installing AWS CLI on GrayTiger:
  • Start from instructions at: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
  • Download into /Download, then sudo install, cli can be run via '/usr/local/bin/aws', or just 'aws' from Linux cli.
  • Then do: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html

Quickstart

For general use, the aws configure command is the fastest way to set up your AWS CLI installation. When you enter this command, the AWS CLI prompts you for four pieces of information:
  • Access key ID
  • Secret access key
  • AWS Region
  • Output format
    • json, yaml, yaml-stream, ...
    • text, table
The AWS CLI stores this information in a profile (a collection of settings) named default in the credentials file. By default, the information in this profile is used when you run an AWS CLI command that doesn't explicitly specify a profile to use. You can create and use additional named profiles with varying credentials and settings by specifying the --profile option and assigning a name. The credentials and config files are updated when you run the command aws configure. The credentials file is located at ~/.aws/credentials on Linux or macOS, or at C:\Users\USERNAME\.aws\credentials on Windows. This file can contain the credential details for the default profile and any named profiles.
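A sketch of what this looks like in practice (the key values are AWS's documentation examples, the region is arbitrary):
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-1
Default output format [None]: json
Afterwards ~/.aws/credentials contains:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
and ~/.aws/config contains:
[default]
region = eu-west-1
output = json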

Usage

Refer to https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-using.html
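A couple of starter commands to confirm the set-up works (bucket and profile names below are made up):
aws sts get-caller-identity
aws s3 ls
aws s3 ls s3://my-bucket --profile myprofile --output table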

Linux virt-manager

The virt-manager application is a desktop user interface for managing virtual machines through libvirt. It primarily targets KVM VMs, but also manages Xen and LXC (linux containers). It presents a summary view of running domains, their live performance & resource utilization statistics. Wizards enable the creation of new domains, and configuration & adjustment of a domain’s resource allocation & virtual hardware. An embedded VNC and SPICE client viewer presents a full graphical console to the guest domain.

See https://virt-manager.org/

Installed on GrayTiger using Synaptics. Start through gnome desktop. Create VM from image.

Linux KVM

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko.

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

KVM is open source software. The kernel component of KVM is included in mainline Linux, as of 2.6.20. The userspace component of KVM is included in mainline QEMU, as of 1.3.

See https://linux-kvm.org/page/Main_Page. For KVM management see https://linux-kvm.org/page/Management_Tools .

To check if the KVM modules are enabled, run: 'lsmod | grep kvm'. Returns:
kvm_amd               151552  0
kvm                  1085440  1 kvm_amd
irqbypass              16384  1 kvm
ccp                   114688  1 kvm_amd
To check ownership of /dev/kvm, run : 'ls -al /dev/kvm', returns:
ls -al /dev/kvm
crw-rw----+ 1 root kvm 10, 232 Sep 24 17:14 /dev/kvm
Add your user to the kvm group in order to access the kvm device: 'sudo usermod -aG kvm $USER', returns:
sudo usermod -aG kvm marc
[sudo] password for marc: 
configuration error - unknown item 'ALWAYS_SET_PATH' (notify administrator)
marc@GrayTiger:~$ sudo usermod -aG kvm $marc
configuration error - unknown item 'ALWAYS_SET_PATH' (notify administrator)
Usage: usermod [options] LOGI
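Two observations on the above. The failed second attempt used '$marc', which is an unset shell variable, so usermod received no login name and printed its usage; the first form with $USER (or simply 'sudo usermod -aG kvm marc') is the one to use. The 'ALWAYS_SET_PATH' message appears to come from an entry in /etc/login.defs that this usermod does not recognise and looks harmless here. To verify the result after logging out and back in (group membership only takes effect for new logins):
id -nG marc
groups marc
Both should now list kvm.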

Docker

Docker basics

Intro:

Docker installation

See https://docs.docker.com/engine/install/debian/

Prerequisites

To install Docker Engine, you need the 64-bit version of one of these Debian versions:
  • Debian Bookworm 12 (stable)
  • Debian Bullseye 11 (oldstable) => GrayTiger
Before you can install Docker Engine, you must first make sure that any conflicting packages are uninstalled. => Synaptics tells me Debian has many Docker packages but none are installed on GrayTiger.

Distro maintainers provide an unofficial distribution of Docker packages in their repositories. You must uninstall these packages before you can install the official version of Docker Engine.

The unofficial packages to uninstall are: docker.io, docker-compose, docker-doc, podman-docker

Moreover, Docker Engine depends on containerd and runc. Docker Engine bundles these dependencies as one package: containerd.io. If you have installed containerd or runc previously, uninstall them to avoid conflicts with the versions bundled with Docker Engine.

Run the following command to uninstall all conflicting packages: 'for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done' - results in
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package 'docker.io' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 325 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package 'docker-doc' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 325 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package 'docker-compose' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 325 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package podman-docker
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package 'containerd' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 325 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package 'runc' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 325 not upgraded.

'Images, containers, volumes, and networks stored in /var/lib/docker/ aren't automatically removed when you uninstall Docker. ' => not installed on GrayTiger.

Installation

Docker Engine for Debian is compatible with x86_64 (or amd64), armhf, arm64, and ppc64le (ppc64el) architectures.

You can install Docker Engine in different ways, depending on your needs:
  • Docker Engine comes bundled with Docker Desktop for Linux. This is the easiest and quickest way to get started. BUT I HAVE KVM ISSUES APPARENTLY...ref above
  • Set up and install Docker Engine from Docker's Apt repository.
  • Install it manually and manage upgrades manually.
  • Use a convenience script. Only recommended for testing and development environments.
Go for manual, see https://docs.docker.com/engine/install/debian/#install-from-a-package.
Step 1
Download the deb file for your release and install it manually. You need to download a new file each time you want to upgrade Docker Engine. Go to https://download.docker.com/linux/debian/dists/, select your Debian version in the list. => Bullseye.

Go to pool/stable/ and select the applicable architecture (amd64, armhf, arm64, or s390x). => https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/

Download the following deb files for the Docker Engine, CLI, containerd, and Docker Compose packages:
    containerd.io__.deb
    docker-ce__.deb
    docker-ce-cli__.deb
    docker-buildx-plugin__.deb
    docker-compose-plugin__.deb
=> Downloaded in /home/marc/Downloads/docker (always most recent version minus one)
Step 2
Install the .deb packages. Update the paths in the following example to where you downloaded the Docker packages.
$ sudo dpkg -i ./containerd.io__.deb \
  ./docker-ce__.deb \
  ./docker-ce-cli__.deb \
  ./docker-buildx-plugin__.deb \
  ./docker-compose-plugin__.deb
Results:

sudo dpkg -i ./containerd.io_1.6.8-1_amd64.deb 
[sudo] password for marc: 
Selecting previously unselected package containerd.io.
(Reading database ... 628390 files and directories currently installed.)
Preparing to unpack .../containerd.io_1.6.8-1_amd64.deb ...
Unpacking containerd.io (1.6.8-1) ...
Setting up containerd.io (1.6.8-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Processing triggers for man-db (2.9.4-2) ..

sudo dpkg -i ./docker-ce-cli_24.0.5-1~debian.11~bullseye_amd64.deb 
Selecting previously unselected package docker-ce-cli.
(Reading database ... 628418 files and directories currently installed.)
Preparing to unpack .../docker-ce-cli_24.0.5-1~debian.11~bullseye_amd64.deb ...
Unpacking docker-ce-cli (5:24.0.5-1~debian.11~bullseye) ...
Setting up docker-ce-cli (5:24.0.5-1~debian.11~bullseye) ...
Processing triggers for man-db (2.9.4-2) ...

sudo dpkg -i ./docker-ce_24.0.5-1~debian.11~bullseye_amd64.deb 
(Reading database ... 628615 files and directories currently installed.)
Preparing to unpack .../docker-ce_24.0.5-1~debian.11~bullseye_amd64.deb ...
Unpacking docker-ce (5:24.0.5-1~debian.11~bullseye) over (5:24.0.5-1~debian.11~bullseye) ...
Setting up docker-ce (5:24.0.5-1~debian.11~bullseye) ...
configuration error - unknown item 'ALWAYS_SET_PATH' (notify administrator)
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.

sudo dpkg -i ./docker-buildx-plugin_0.11.1-1~debian.11~bullseye_amd64.deb 
Selecting previously unselected package docker-buildx-plugin.
(Reading database ... 628615 files and directories currently installed.)
Preparing to unpack .../docker-buildx-plugin_0.11.1-1~debian.11~bullseye_amd64.deb ...
Unpacking docker-buildx-plugin (0.11.1-1~debian.11~bullseye) ...
Setting up docker-buildx-plugin (0.11.1-1~debian.11~bullseye) ...

sudo dpkg -i ./docker-compose-plugin_2.5.0~debian-bullseye_amd64.deb 
Selecting previously unselected package docker-compose-plugin.
(Reading database ... 628619 files and directories currently installed.)
Preparing to unpack .../docker-compose-plugin_2.5.0~debian-bullseye_amd64.deb ...
Unpacking docker-compose-plugin (2.5.0~debian-bullseye) ...
Setting up docker-compose-plugin (2.5.0~debian-bullseye) ...
The Docker daemon starts automatically.
Step 3
Verify that the Docker Engine installation is successful by running the hello-world image:
  • sudo service docker start
  • sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits. Returns:
sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
719385e32844: Pull complete 
Digest: sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with: $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID: https://hub.docker.com/

For more examples and ideas, visit: https://docs.docker.com/get-started/
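Optional follow-up (a sketch of the standard post-install step from Docker's docs; requires logging out and back in): add your user to the docker group so docker can be run without sudo:
sudo usermod -aG docker marc
Note the security trade-off: membership of the docker group is effectively root-equivalent on this machine.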

Files and database storage

Peeking inside a large file with less

The 'less' command shows file contents, also for really large files. Use arrows-up/down to scroll.

Splitting a large file with split

The split command

Use 'split file-you-want-to-be-split prefix-for-new-files'.
  • By default the output file's size is 1000 lines
  • To have another output file size: specify '-b, --bytes=SIZE' to put SIZE bytes per output file

Splitting data.world L1 ttl (unnecessary if you use graphdb's server import feature)

Download. Do I need to split it? Try direct load into workbench .... Error message: 'Too big, use Server File import'. Move ttl file to /home/marc/graphdb-imports. File shows up in Workbench/Server Files. Alt: split, but in 200 MB or so chunks (see the sketch below).
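A sketch of that alternative (the file name is made up):
split -b 200M L1-data.ttl L1_chunk_
This yields L1_chunk_aa, L1_chunk_ab, ... of 200 MB each. Since -b cuts on byte boundaries, a chunk will usually end in the middle of a statement, so the pieces need repair before they parse - the same problem as observed for the GLEIF concatenated file below.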

Splitting a gleif-concatenated file - first attempts - legacy

Splitting a GLEIF concatenated file with split - legacy

Use 'split file-you-want-to-be-split prefix-for-new-files'. For example: 'split 20221107-gleif-concatenated-file-lei2.xml 20221107_Concat_Split'. This yields 100.000+ files of around 50 KB.

Just splitting will cut off your file somewhere in the middle, so the OWL XML is no longer valid (post factum observation: it's not even OWL in a concat file). Shortcut: copy file '20221107_Concat_Splitaa' into '20221107_Concat_Splitaa.owl'. Edit file, throw away incomplete part, add new end tags.

Thrown away end: QUOTE 029200156B8S2G1I9C02 NIGERIAN INTERNATIONAL SECURITIES LIMITED 3 ALHAJI KANIKE STREET OFF AWOLOWO ROAD, S.W IKOYI LAGOS NG-LA NG UNQUOTE

File layout

Observation: LEI records start around line 207, after Then starts:

Parsing errors

'RDF Parse Error: unexpected literal [line 8, column 78]' Reason unsure. What is the problem?
  • Related to blanks right after '>'?
  • Related to the fact it's XML but invalid OWL?

Assuming it is related to being invalid OWL syntax - correct assumption

OWL file should contain ... owl:class, owl:objectproperty, owl:uniqueindividual etc. Indeed, it's only plain XML.

Assuming it is related to invalid blanks: use the tr command - incorrect assumption

The tr command reads a byte stream from standard input (stdin), translates or deletes characters, then writes the result to standard output (stdout).

We can use the tr command’s -d option – for deleting specific characters – to remove whitespace characters. The syntax is: tr -d SET1.

So, depending on the requirement, passing the right SET1 characters to tr allows to either remove only horizontal whitespace characters or all whitespace.

Removing Horizontal Whitespace only: tr defines the “[:blank:]” character set for all horizontal whitespace. E.g. tr -d "[:blank:]" < raw_file.txt | cat -n. This removes all horizontal whitespace but keeps the line breaks.

Remove all whitespace characters from the file using the “[:space:]” character set, which means all horizontal and vertical whitespace. E.g. tr -d "[:space:]" < raw_file.txt. Or in our case: 'tr -d "[:space:]" < 20221107_Concat_Splitaa3.owl > splitout.owl'.

Then: 'RDF Parse Error: White space is required between the processing instruction target and data. [line 1, column 18]'

Mounting Android

Connecting Galaxy phone to Debbybuster does not work anymore. Systemlog reveals: gvfsd[1347]: Error 1: Get Storage information failed. Googling indicates a bug in libmtp as possible source. What's my version of libmtp, how do I upgrade it?
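One way to answer both questions with standard Debian tooling (the runtime package is called libmtp9 on buster - an assumption worth double-checking):
dpkg -l | grep -i libmtp
apt-cache policy libmtp9
The first shows what is installed, the second which newer version, if any, the configured repositories offer.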

MTP

Media Transfer Protocol (MTP) is used for transferring files between devices, notably between newer Android or Microsoft smartphones and your Debian host. See https://wiki.debian.org/mtp . GNOME applications (like GNOME_Files and Archive_Manager) use GIO-based GVFS (the GNOME virtual file system) to mount MTP volumes and access files on MTP devices:

gvfs - gnome virtual filesystem

Install package gvfs-backends. Query "apt list --all-versions > /home/marc/apt-list.txt" reveals:
gvfs-backends/oldstable,now 1.38.1-5 amd64 [installed,automatic]
gvfs-bin/oldstable 1.38.1-5 amd64
gvfs-common/oldstable,now 1.38.1-5 all [installed,automatic]
gvfs-daemons/oldstable,now 1.38.1-5 amd64 [installed,automatic]
gvfs-fuse/oldstable,now 1.38.1-5 amd64 [installed,automatic]
gvfs-libs/oldstable,now 1.38.1-5 amd64 [installed,automatic]
gvfs/oldstable,now 1.38.1-5 amd64 [installed,automatic]
So there's gvfs installed - backends version 1.38.1-5....

gMTP

A desktop-agnostic tool that allows you to simply mount and manage files on MTP-connected devices is gMTP, which can be obtained by installing the gmtp package. Starts a GUI but cannot connect to Galaxy phone.

mtp-tools

See https://packages.debian.org/buster/mtp-tools . I have "mtp-tools/oldstable 1.1.16-2 amd64". This should contain utilities such as mtp-detect, mtp-files etc. However, these cannot be executed (cmd not found). Commands are described at http://manpages.ubuntu.com/manpages/trusty/man1/mtp-tools.1.html
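Two quick checks with standard dpkg/apt usage: 'dpkg -L mtp-tools | grep bin' shows whether and where the binaries actually ended up on disk. Also note that the 'apt list' line quoted above has no [installed] marker, which normally means the package is merely available, not installed - in that case 'sudo apt-get install mtp-tools' would be the missing step.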

Berkeley db files

This database format is used e.g. by Netscape Communicator. Refer to www.sleepycat.com.

Oracle

To be done.

Applications - WWW clients

Firefox

Basics

For an intro refer to Mozilla/Firefox.

Initial installation on GrayTiger

Installed as part of Debian. Status: 'apt list --installed | grep firefox', showing:
  • firefox-esr-l10n-en-gb/stable,now 91.13.0esr-1~deb11u1 all [installed,upgradable to: 102.8.0esr-1~deb11u1]
  • firefox-esr/stable,now 91.13.0esr-1~deb11u1 amd64 [installed,upgradable to: 102.8.0esr-1~deb11u1]
Package info: apt show packagename, resulting in:
  • Package: firefox-esr-l10n-en-gb
  • Version: 91.13.0esr-1~deb11u1
  • Priority: optional
  • Section: localization
  • Source: firefox-esr
  • Maintainer: Maintainers of Mozilla-related packages
  • Installed-Size: 676 kB
  • Depends: firefox-esr (>= 91.13.0esr-1~deb11u1), firefox-esr (<< 91.13.0esr-1~deb11u1.1~)
  • Recommends: hunspell-en-gb | hunspell-en-us
  • Download-Size: 536 kB
  • APT-Manual-Installed: no
  • APT-Sources: http://deb.debian.org/debian bullseye/main amd64 Packages
  • Description: English (United Kingdom) language package for Firefox ESR. Firefox ESR is a powerful, extensible web browser with support for modern web application technologies. This package contains the localization of Firefox ESR in English (United Kingdom).
What is inside the package: dpkg -L firefox-esr-l10n-en-gb, showing:
  • /usr/lib/firefox-esr and below
  • /usr/share/doc/firefox-esr-l10n-en-gb and below
And: you find info and config in /home/marc/.mozilla

Upgrading Firefox installation on GrayTiger 202407

Do 'apt list --installed | grep firefox' to find out:
firefox-esr-l10n-en-gb/now 91.13.0esr-1~deb11u1 all [installed,upgradable to: 115.13.0esr-1~deb11u1]
firefox-esr/now 91.13.0esr-1~deb11u1 amd64 [installed,upgradable to: 115.13.0esr-1~deb11u1]
Interim conclusion: 'upgradable'. How? See 'https://support.mozilla.org/en-US/kb/update-firefox-latest-release'. Explains how to do it within Firefox itself. Shows '91.13.0esr (64-bit)', 'when an update is available it will be downloaded automatically'. But nothing happens, so no update available... Still, version seems outdated.

See https://support.mozilla.org/en-US/kb/install-firefox-linux for info.

See https://www.mozilla.org/en-GB/firefox/download/thanks/ for setting up APT repo on Debian, details on https://support.mozilla.org/en-US/kb/install-firefox-linux.
Install Firefox .deb package for Debian-based distributions

To install the .deb package through the APT repository, do the following:

    Create a directory to store APT repository keys if it doesn't exist: sudo install -d -m 0755 /etc/apt/keyrings
     >OK 
    Import the Mozilla APT repository signing key:
    wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
    >OK, there's a key present now
    
    The fingerprint should be 35BAA0B33E9EB396F59CA838C0BA5CE6DC6315A3. You may check it with the following command:

    gpg -n -q --import --import-options import-show /etc/apt/keyrings/packages.mozilla.org.asc | awk '/pub/{getline; gsub(/^ +| +$/,""); if($0 == "35BAA0B33E9EB396F59CA838C0BA5CE6DC6315A3") print "\nThe key fingerprint matches ("$0").\n"; else print "\nVerification failed: the fingerprint ("$0") does not match the expected one.\n"}'
    >OK, matches
    
    Next, add the Mozilla APT repository to your sources list:
    echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null
    >OK, included 
    
    Configure APT to prioritize packages from the Mozilla repository:

    echo '
    Package: *
    Pin: origin packages.mozilla.org
    Pin-Priority: 1000
    ' | sudo tee /etc/apt/preferences.d/mozilla
    
    >OK, or so it seems

    Update your package list and install the Firefox .deb package: sudo apt-get update && sudo apt-get install firefox
    >Results in: 
    
    sudo apt-get update && sudo apt-get install firefox
Hit:1 http://deb.debian.org/debian bullseye InRelease
Hit:2 http://security.debian.org/debian-security bullseye-security InRelease   
Hit:3 http://deb.debian.org/debian bullseye-updates InRelease                  
Hit:4 http://deb.debian.org/debian bullseye-backports InRelease                
Hit:6 https://eid.static.bosa.fgov.be/debian bullseye InRelease                
Get:7 https://dl.google.com/linux/chrome/deb stable InRelease [1,825 B]        
Hit:8 https://files.eid.belgium.be/debian bullseye InRelease                   
Hit:9 https://packages.microsoft.com/repos/ms-teams stable InRelease
Hit:5 https://packages.microsoft.com/repos/code stable InRelease
Get:10 https://packages.mozilla.org/apt mozilla InRelease [1,528 B]
Err:7 https://dl.google.com/linux/chrome/deb stable InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E88979FB9B30ACF2
Get:11 https://packages.mozilla.org/apt mozilla/main all Packages [12.8 MB]
Get:12 https://packages.mozilla.org/apt mozilla/main amd64 Packages [227 kB]
Fetched 13.1 MB in 6s (2,113 kB/s)                                             
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://dl.google.com/linux/chrome/deb stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E88979FB9B30ACF2
W: Failed to fetch https://dl.google.com/linux/chrome/deb/dists/stable/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E88979FB9B30ACF2
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  firefox
0 upgraded, 1 newly installed, 0 to remove and 395 not upgraded.
Need to get 70.3 MB of archives.
After this operation, 254 MB of additional disk space will be used.
Get:1 https://packages.mozilla.org/apt mozilla/main amd64 firefox amd64 128.0.3~build1 [70.3 MB]
Fetched 70.3 MB in 24s (2,912 kB/s)                                            
Selecting previously unselected package firefox.
(Reading database ... 639502 files and directories currently installed.)
Preparing to unpack .../firefox_128.0.3~build1_amd64.deb ...
Unpacking firefox (128.0.3~build1) ...
Setting up firefox (128.0.3~build1) ...
Processing triggers for mailcap (3.69) ...
Processing triggers for desktop-file-utils (0.26-1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1) ...
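As a side note, the NO_PUBKEY E88979FB9B30ACF2 warnings concern the Google Chrome repository, not Mozilla's. One common remedy (an assumption - whether it applies depends on how /etc/apt/sources.list.d/google-chrome.list is set up) is to re-import Google's Linux signing key and update again:
wget -qO- https://dl.google.com/linux/linux_signing_key.pub | sudo tee /etc/apt/trusted.gpg.d/google-linux.asc > /dev/null
sudo apt-get update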
    
    
Synaptics now shows that version 128 is installed, in /usr/bin and /usr/lib. Starting Firefox from Gnome GUI still starts the old version. Solution: launch /usr/bin/firefox, or use secondary firefox icon in Gnome GUI.

Belgian eID

For BeID extension refer to Belpic information.

Specific topics

Addblock

Addblock

Liveheaders

liveheaders

Hackbar

Hackbar

Building Firefox from source

See https://firefox-source-docs.mozilla.org/setup/linux_build.html#building-firefox-on-linux

Chrome

Basics

For an intro refer to Chrome.

Installation

'sudo dpkg -i google-chrome......deb'. What is installed: 'apt list --installed | grep chrome', showing:
  • chrome-gnome-shell/stable,now 10.1-5 all [installed,automatic]
  • google-chrome-stable/now 99.0.4844.74-1 amd64 [installed,upgradable to: 110.0.5481.100-1]
What is inside the package of chrome itself: dpkg -l google-chrome-stable, showing: dpkg -L 'google-chrome-stable' /. /etc /etc/cron.daily /opt /opt/google /opt/google/chrome /opt/google/chrome/MEIPreload /opt/google/chrome/MEIPreload/manifest.json /opt/google/chrome/MEIPreload/preloaded_data.pb /opt/google/chrome/WidevineCdm /opt/google/chrome/WidevineCdm/LICENSE /opt/google/chrome/WidevineCdm/_platform_specific /opt/google/chrome/WidevineCdm/_platform_specific/linux_x64 /opt/google/chrome/WidevineCdm/_platform_specific/linux_x64/libwidevinecdm.so /opt/google/chrome/WidevineCdm/manifest.json /opt/google/chrome/chrome /opt/google/chrome/chrome-sandbox /opt/google/chrome/chrome_100_percent.pak /opt/google/chrome/chrome_200_percent.pak /opt/google/chrome/chrome_crashpad_handler /opt/google/chrome/cron /opt/google/chrome/cron/google-chrome /opt/google/chrome/default-app-block /opt/google/chrome/default_apps /opt/google/chrome/default_apps/external_extensions.json /opt/google/chrome/google-chrome /opt/google/chrome/icudtl.dat /opt/google/chrome/libEGL.so /opt/google/chrome/libGLESv2.so /opt/google/chrome/liboptimization_guide_internal.so /opt/google/chrome/libvk_swiftshader.so /opt/google/chrome/libvulkan.so.1 /opt/google/chrome/locales /opt/google/chrome/locales/am.pak /opt/google/chrome/locales/ar.pak /opt/google/chrome/locales/bg.pak /opt/google/chrome/locales/bn.pak /opt/google/chrome/locales/ca.pak /opt/google/chrome/locales/cs.pak /opt/google/chrome/locales/da.pak /opt/google/chrome/locales/de.pak /opt/google/chrome/locales/el.pak /opt/google/chrome/locales/en-GB.pak /opt/google/chrome/locales/en-US.pak /opt/google/chrome/locales/es-419.pak /opt/google/chrome/locales/es.pak /opt/google/chrome/locales/et.pak /opt/google/chrome/locales/fa.pak /opt/google/chrome/locales/fi.pak /opt/google/chrome/locales/fil.pak /opt/google/chrome/locales/fr.pak /opt/google/chrome/locales/gu.pak /opt/google/chrome/locales/he.pak /opt/google/chrome/locales/hi.pak /opt/google/chrome/locales/hr.pak /opt/google/chrome/locales/hu.pak /opt/google/chrome/locales/id.pak /opt/google/chrome/locales/it.pak /opt/google/chrome/locales/ja.pak /opt/google/chrome/locales/kn.pak /opt/google/chrome/locales/ko.pak /opt/google/chrome/locales/lt.pak /opt/google/chrome/locales/lv.pak /opt/google/chrome/locales/ml.pak /opt/google/chrome/locales/mr.pak /opt/google/chrome/locales/ms.pak /opt/google/chrome/locales/nb.pak /opt/google/chrome/locales/nl.pak /opt/google/chrome/locales/pl.pak /opt/google/chrome/locales/pt-BR.pak /opt/google/chrome/locales/pt-PT.pak /opt/google/chrome/locales/ro.pak /opt/google/chrome/locales/ru.pak /opt/google/chrome/locales/sk.pak /opt/google/chrome/locales/sl.pak /opt/google/chrome/locales/sr.pak /opt/google/chrome/locales/sv.pak /opt/google/chrome/locales/sw.pak /opt/google/chrome/locales/ta.pak /opt/google/chrome/locales/te.pak /opt/google/chrome/locales/th.pak /opt/google/chrome/locales/tr.pak /opt/google/chrome/locales/uk.pak /opt/google/chrome/locales/vi.pak /opt/google/chrome/locales/zh-CN.pak /opt/google/chrome/locales/zh-TW.pak /opt/google/chrome/nacl_helper /opt/google/chrome/nacl_helper_bootstrap /opt/google/chrome/nacl_irt_x86_64.nexe /opt/google/chrome/product_logo_128.png /opt/google/chrome/product_logo_16.png /opt/google/chrome/product_logo_24.png /opt/google/chrome/product_logo_256.png /opt/google/chrome/product_logo_32.png /opt/google/chrome/product_logo_32.xpm /opt/google/chrome/product_logo_48.png /opt/google/chrome/product_logo_64.png /opt/google/chrome/resources.pak 
/opt/google/chrome/swiftshader /opt/google/chrome/swiftshader/libEGL.so /opt/google/chrome/swiftshader/libGLESv2.so /opt/google/chrome/v8_context_snapshot.bin /opt/google/chrome/vk_swiftshader_icd.json /opt/google/chrome/xdg-mime /opt/google/chrome/xdg-settings /usr /usr/bin /usr/share /usr/share/appdata /usr/share/appdata/google-chrome.appdata.xml /usr/share/applications /usr/share/applications/google-chrome.desktop /usr/share/doc /usr/share/doc/google-chrome-stable /usr/share/doc/google-chrome-stable/changelog.gz /usr/share/gnome-control-center /usr/share/gnome-control-center/default-apps /usr/share/gnome-control-center/default-apps/google-chrome.xml /usr/share/man /usr/share/man/man1 /usr/share/man/man1/google-chrome-stable.1.gz /usr/share/menu /usr/share/menu/google-chrome.menu /etc/cron.daily/google-chrome /usr/bin/google-chrome-stable /usr/share/man/man1/google-chrome.1.gz

Start-up

Observing the need to update

Starting Chrome through Desktop gui I see:
  • Nearly up to date! Relaunch Chrome to finish updating.
  • Version 99.0.4844.74 (Official Build) (64-bit)
Relaunching loops and gets me back at the same message. Searching: /usr/bin/google-chrome-stable is a bash script that starts Chrome. Loops. What version should I be on: https://en.wikipedia.org/wiki/Google_Chrome_version_history tells me: 108 or 109 or 110 seems reasonable, 99 seems old. So I should probably remove and reinstall. There are .deb packages in /home/marc/Downloads/chrome.

Updating

Approach: remove current chrome installation, check everything is gone, reinstall...
Removal
Command: 'apt-get --purge remove foo'. E.g. sudo apt-get --purge remove globalprotect. So:
  • To remove 'chrome-gnome-shell': sudo apt-get --purge remove chrome-gnome-shell : OK on 23/2/2023 15:11
  • To remove 'google-chrome-stable': sudo apt-get --purge remove google-chrome-stable : OK on 23/2/2023 15:12
Install .deb
Possible installation commands:
  • sudo apt-get install <package name> (frequently the package name is simply the name of the desired executable application),
  • sudo apt-get update,
  • sudo apt-get upgrade
So: download from https://www.google.com/chrome/ => gets you 'google-chrome-stable_current_amd64.deb'.
  • To install the downloaded 'google-chrome-stable_current_amd64.deb': sudo apt install ./google-chrome-stable_current_amd64.deb
Result:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'google-chrome-stable' instead of './google-chrome-stable_current_amd64.deb'
The following NEW packages will be installed:
  google-chrome-stable
0 upgraded, 1 newly installed, 0 to remove and 198 not upgraded.
Need to get 0 B/94.0 MB of archives.
After this operation, 319 MB of additional disk space will be used.
Get:1 /home/marc/Downloads/chrome/google-chrome-stable_current_amd64.deb google-chrome-stable amd64 110.0.5481.177-1 [94.0 MB]
Selecting previously unselected package google-chrome-stable.
(Reading database ... 331315 files and directories currently installed.)
Preparing to unpack .../google-chrome-stable_current_amd64.deb ...
Unpacking google-chrome-stable (110.0.5481.177-1) ...
Setting up google-chrome-stable (110.0.5481.177-1) ...
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/x-www-browser (x-www-browser) in auto mode
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/gnome-www-browser (gnome-www-browser) in auto mode
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/google-chrome (google-chrome) in auto mode
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for mailcap (3.69) ...
Processing triggers for desktop-file-utils (0.26-1) ...
Starting/configuring Chrome
Start Chrome via the Gnome GUI. 'Help' now shows Version 110.0.5481.177 (Official Build) (64-bit). Use Settings/Autofill/Password manager to inspect the passwords managed by Chrome. Seems OK.
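For a quick check from the command line as well (a minimal sketch, using the package and binary names above):
google-chrome --version
dpkg -s google-chrome-stable | grep -i '^version'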

Gnome account manager

Via 'Gnome Settings/Online accounts' you can add accounts such as a Google account, which handles the authentication for e.g. Google Drive.

Gnome Web

Web 3.38.2

Netscape client - legacy

Netscape 4.03 on bugis etc (manual install)

Ftp to 'ftp.netscape.com', cd to /pub/communicator/4.03/shipping/english/unix/... . Carry out a get into /RMS_Programs/Netscape/navi.... . Gunzip, tar -xvf. Then run ns-install. First time : fails, even logs me out. Browsing ns-install. Run it a second time. OK. Executable goes into /usr/local/netscape/netscape. Added an entry in system.fvwmrc to call it.

Netscape 4.05 (part of SuSE 5.3)

Installation and configuration

As Navigator 4.05 is part of SuSE, it gets (almost) automatically installed. Basic files go into /opt/netscape. Plugins reside in /opt/netscape/plugins. Caching goes into /root/.netscape/cache etc. Don't forget to clean up every now and then. Resetting your visited links: edit/preferences/navigator/clear history. Alternatively, go to /root/.netscape and clean out manually.

LDAP client included now (edit/search directory)

Good info can be found at Netscape's developer's site. Configuration can be done via:
  • Mission Control (central management tool);
  • The Netscape client's GUI (Preferences...);
  • netscape.cfg (the basic configuration file), note that this file can point to an AutoConfigURL, which will also be read;
  • config.jsc (the configuration file for Professional Edition)
  • directly editing resources such as bitmaps and help files (not recommended unless you use Mission Control to distribute your changes);
Don't forget that firewalls and proxy servers can also influence the behaviour of your browser (e.g. locking out https).

Netscape 4.72 (part of SuSE 6.4)

Surfing

Surfing in the PwC office: needs PwC DNS (10.54.72.40) and proxy (proxy-be, port 8080 for http, https and ftp). Also needs to accept the certificate from the firewall (10.54.20.4), which is signed by PWC_TREE. The PWC_TREE certificate is not a root-signer and hence is not visible through the Netscape GUI-view on the certificate db. Nevertheless, you can view it (e.g. with mc), and then you'll notice there is a Novell certificate attribute embedded, including url.

Security configuration

All certificates go in /root/.netscape/cert7.db . This includes own personal certificate. You can use e.g. 'mc' to browse the contents of this cert7.db file. Alternatively, go Communicator/Security Info/Cryptographic Modules and select e.g. Netscape internal PKCS #11 module. Here you find 'slots', one for crypto services and another one for certificate db. Here you can configure, change password, login/logout, etc.

HOWEVER, how good is my private key? Netscape says its servers and clients contain a piece of software called 'Security Module 1', which is FIPS-140 compliant. For example browsers version 4.02 and above include Security Module 1. HOWEVER, my Linux Navigator says my security module is Netscape Internal PKCS#11 Module. This sounds different... Email sent to fips@netscape.com ...

Netscape's FIPS-FAQ states they also obtained FIPS certificates for their DES, 3DES, SHA1 and DSA implementations. Do I have this?

Go to HELP - About Communicator - RSA product embedded: RSA public key support, MD2, MD5, RC2-CBC, RC4. HOWEVER how good is my private key protected? Your key is stored in '/root/.netscape/key3.db'. Your certificates go in '/root/.netscape/cert7.db'. I assume they are protected under the relevant PKCS mechanisms such as PKCS #5 PBE etc.

What if your Netscape seems to hang ("Java starting...")? Find the process with "ps -a" and kill it with "kill -s 9 <pid>" (e.g. "kill -s 9 123").

Netscape 4.74 (part of SuSE 7.0)

Basic crypto support of the Communicator provided by SuSE (from the "about" screen): "This version supports U.S. security with RSA Public Key Cryptography, MD2,MD5, RC2-CBC, RC4, DES-CBC, DES-EDE3-CBC". Ciphers for SSL v3 include RC4 with 128 bit and 3DES. Ciphers for S/MIME include RC2 with 128 bit and 3DES. Hence there does not seem to be any need to apply Fortify. Anyway, just for the record, Fortify is provided by SuSE, package fortify-1.4.6-10, from www.fortify.net. Info in /opt/fortify.

Email/calendar clients - evolution - pine - mail

Evolution

Gnome mail/calendar client - Seahorse - ... Config on GrayTiger is described at /home/marc/Documents/Alpha/001-C4-Adm/101-C4-huis/100-IT/31 Debian Laptop. Unfortunately it is complicated to access gmail/gmail calendar.

pine

A simple but efficient email client. Ref man pine.

mail

Ref man mail. Use q to quit, p to print a mail message. v

Youtube download

Initial usage

Basics:
  • home: https://youtube-dl.org/
  • requires Python version 2.6, 2.7, or 3.2+ to work except for Windows exe
  • Download from https://github.com/ytdl-org/youtube-dl/blob/master/README.md#readme

Installation: via curl as explained in instructions, plus sudo apt-get install python.

Usage: 'youtube-dl url'.
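A few commonly useful invocations (a sketch; the URL is a placeholder, and audio extraction assumes ffmpeg is installed):
youtube-dl --version                                                           # show the installed version
youtube-dl -F 'https://www.youtube.com/watch?v=VIDEO_ID'                       # list available formats
youtube-dl -f best 'https://www.youtube.com/watch?v=VIDEO_ID'                  # download the best single-file format
youtube-dl -x --audio-format mp3 'https://www.youtube.com/watch?v=VIDEO_ID'    # audio only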

Version installed April 2023

youtube-dl -v
  • [debug] System config: []
  • [debug] User config: []
  • [debug] Custom config: []
  • [debug] Command-line args: [u'-v']
  • [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
  • [debug] youtube-dl version 2021.12.17
  • [debug] Python version 2.7.18 (CPython) - Linux-5.18.0-0.deb11.3-amd64-x86_64-with-debian-11.4
  • [debug] exe versions: none
  • [debug] Proxy map: {}

Errors

Error: 'unable to extract uploader id'. Discussed at 'https://github.com/ytdl-org/youtube-dl/issues/31530'. Solution: use more recent version.

There are updates at 'https://github.com/ytdl-patched/youtube-dl'. Downloaded one, fails to start ... Python issues.

So how to upgrade? Website https://youtube-dl.org/ says version 2021.12.17 is latest version.

Applications - WWW servers

Ftpd

On Angkor3, installed pure-ftpd. Verify security settings... Documentation in /usr/share/doc/pure-ftpd. Executables in /usr/sbin. (pending question: how about the sftp which comes inside ssh?)
  • Starting: 'service pure-ftpd start' (legacy: /usr/sbin/pure-ftpd &)
  • Legacy stopping: pkill pure-ftpd
Status: 'ps auxw|grep pure-ftpd' should show you the SERVER process and optionally some connected clients. To connect you can e.g. use 'ip addr' (or the older 'ifconfig') on the server to list the server's IP address. Then use e.g. Filezilla on the client.
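To double-check that the daemon is actually listening (a sketch; assumes iproute2 and a systemd-managed service, adjust to the init system in use):
sudo ss -tlnp | grep ':21'        # anything listening on the FTP control port?
systemctl status pure-ftpd        # service status, if managed by systemd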

Legacy on Angkor2: installed vsftpd via sudo apt-get etc. See ubuntuforums.org. There is a config file in /etc/vsftpd.conf. You start/restart with 'sudo /etc/init.d/vsftpd restart'. Only worked once, apparently then hung. On Angkor2, installed ftpd. No instructions to be found on how to start it, manually starting it as daemon fails, removed again. Tried pure-ftpd - seems OK.

So (O'Reilly's 'Managing Internet information services', p. 54): (1) inetd must fire up a daemon; inetd.conf contains the following line: 'ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a' to listen for incoming ftp. So inetd hands over to tcpd, which hands over to in.ftpd (which is actually /usr/sbin/in.ftpd, cfr glint).

(2) user ftp must exist
User ftp exists in /etc/passwd (this allows 'anonymous' to connect to your ftpd)

(3) wu-ftpd config files need to be adjusted
(31) /etc/ftpaccess (by default, definitions in here are active)
(32) /etc/ftpconversions
(33) /etc/ftphosts (can deny particular hosts)
(34) /etc/ftpusers ('inverse logic', denies users like root access)

(4) directories on the server side. According to 'man ftpd', the server performs a chroot to '/home/ftp'. Here you find the bin, dev, etc, pub directories... . OK, I can 'get' from Bugis. However, to upload, I need an 'upload' statement in my /etc/ftpaccess file.

And I need to allow write access on the /home/ftp/pub directory: I've used a coarse way: chown ftp /home/ftp/pub; chmod a+w /home/ftp/pub. Verify the results with ls -al. You can now ftp to '0' or Kassandra, login as anonymous, cd /pub. If you want to upload, remember:

On Bugis:
- lcd: shows you the local directory, i.e. on Bugis, e.g. /root
- pwd: shows you the remote directory, i.e. on Kassandra.
- cd: changes the remote directory, i.e. on Kassandra.
- ls: lists the remote directory, i.e. on Kassandra.
- put: will write into the remote directory, and you need to have the access rights for that. So typically, you need to cd to it.
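Putting it together, a hypothetical anonymous upload session from Bugis might look like this (hostname, file names and the email-style anonymous password are just examples):
ftp kassandra
Name: anonymous
Password: marc@bugis
ftp> lcd /root
ftp> cd /pub
ftp> put notes.txt
ftp> bye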

Httpd - Apache

Apache - basics

Apache is part of most distributions, you can also checkout "www.apache.org". The main configuration file is called the "ServerConfig" file, e.g. "/etc/httpd/httpd.conf". This is based on the NCSA server and its config file, full details on "www.apache.org/docs".

SuSE 6.0 Apache - starting/stopping - configuration

Use "httpd -?" to find out all the options for starting Apache.

Use the manual command line or the System V Init editor to start/stop Apache. The editor's entry is linked to /etc/rc.d/init.d/apache. This currently starts "/usr/sbin/httpd -f /etc/httpd/httpd.conf -D SSL...". This -f flag specifies the full path to the "ServerConfig (=httpd.conf)" file. After processing ServerConfig, the settings of ResourceConfig and AccessConfig directives determine what happens next. Both directives are included in the ServerConfig file, and are by default commented out. This default results in processing "srm.conf" and "access.conf". Both are by default empty - it is suggested to leave them empty.

Note the role of the ServerRoot directive: if you specify a filename starting with a /, this is absolute. If there's no /, the value of ServerRoot (e.g. /usr/local/httpd) is prepended.

The SuSE 6.0 installation includes Apache 1.3: start it via System V Init editor, and point your browser simply to "localhost". There you are. ServerRoot points to "/usr/local/httpd", so the demo website is served from "/usr/local/httpd/index.html".

Directive DocumentRoot (default "/usr/local/httpd/htdocs") defines where you serve documents from. Further directives will specify authorizations on your documents.

SuSE 6.0 Apache - logging

As specified by directives in ServerRoot, logging goes by default into:
  • "/var/log/httpd.error_log" - directive ErrorLog defines this;
  • "/var/log/httpd.access_log" - directive CustomLog defines this - directive LogLevel (e.g. warn) defines the granularity.

SuSE 6.0 Apache - status

Point your browser to "http://localhost:80/server-status". Setting directive ExtendedStatus on (in ServerConfig, i.e. httpd.conf) gives more info.
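A quick check from the shell instead of the browser (a sketch; assumes some text-mode HTTP client is installed):
curl http://localhost/server-status
lynx -dump http://localhost/server-status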

SuSE 6.1 Apache on malekh

Some facts:
  • executable: /usr/sbin/httpd (binary)
  • man page: man httpd
  • documentation in "/usr/local/httpd/htdocs/manual/index.html"
  • sample configuration: "/etc/httpd/httpd.conf"
  • adopted configuration: "/Kassandra_Control/conf/httpd.conf": this file contained a lot of "invalid commands" (which used to work on boy), defined by a module not included in the server... so what is included? Running "httpd -l" lists that only two modules are compiled in:
    • http_core.c
    • mod_so.c
    Conclusion: compared to the modules that exist (close to 100) this is limited. However, there is an impressive amount of modules in "/usr/lib/apache/mod_....."
Oddly enough, I get a "sqlinit: DBROOT must be set" when starting. Now "/etc/rc.d/rc3.d/K20apache" starts with "DBROOT=/dev/null". So when should this be set? WHAT IS THE NORMAL WAY TO START APACHE??? WHY DO I NEED sql??? OK, cool:
  • There's a mistake in SuSE's scripts for starting Apache (in fact they forgot it altogether). Got the suggested correction from the SuSE Support Database (SDB), and saved it in /Kassandra_Scripts/malekh.apache.correction. Then executed it. Still "DBROOT must be set". Hmhm. OK, my fault: should use "rcapache start" rather than "httpd -f ....".
  • And there's also a specific problem with setting DBROOT, solution now saved in /Malekh/ApacheDBROOTproblem (but this does not seem mandatory to fix)

Now "rcapache start" works, but "sh -x rcapache start" shows it uses the standard /etc/httpd/httpd.conf file (rather than my own one). How to fix this? Save a copy of the original, and write my conf file over it. OK, now Netscape can talk to my Apache, and gets a "forbidden". Since the ServerRoot points to /Kassandra_Control, you should surf to e.g. "http://localhost/LinuxWeb.html". Indexes does not seem to be generated automatically.

Apache 1.3.12 on imagine - SuSE 7.0

Documentation in "/usr/share/doc/packages/apache". SuSE manual points to:
  • /usr/local/httpd (example documents)
  • /etc/httpd/httpd.conf (by default configured as SuSE help system)
I saved the .conf into .original, and created a LinuxWeb specific .conf file. Useful:
  • /usr/sbin/rcapache help (for help on how to start/stop Apache)
  • "rcapache full-status" informs you on current status
  • "sh -x rcapache start" (useful for debugging)
Access via Netscape, "http://localhost" or "http://imagine".

Squid - WWWOFFLE

Squid is a cache server/www-proxy. Documentation in /usr/doc/packages/squid. Configuration via "/etc/squid.conf". WWWOFFLE is a www offline explorer, another proxy server, capable of interacting with e.g. diald. Configuration via "/etc/wwwoffle/wwwoffle.conf".

htdig

Builds an index over a document-base which is served from a webserver. No manpage for htdig. However, there is a full package and corresponding documentation: "/usr/doc/packages/htdig/htdoc/index.html".

htdig's files: "/opt/www/".

  • /opt/www/htdig/conf/htdig.conf: starturl points to localhost, basedir where htdig resides, commondir where search.html resides etc
  • Apache: /etc/httpd/httpd.conf: ServerRoot (/Kassandra_...), DocumentRoot (?),

How it works:
  • digging - before you can search, htdig (acting as a regular www user) builds the db of all documents that need to be searched, this results in:
    1. a list of words that can be searched for in /opt/www/htdig/db.wordlist
    2. a db of 'hits' in db.docdb
  • merging - htmerge converts the db into something quickly searchable
  • searching - htsearch is a cgi program invoked through html which carries out your searches
Script "rundig (or rundig -vvv to see debug output)" executes both htdig and htmerge. Search via ... TODO: narrow search for htdig via config file.

Ecommerce - minivend

An electronic catalog system (shopping cart). Refer to the article in LJ June 1999. Check-out www.minivend.com.

OpenLDAP

Getting started: via xrpm.
  • "slapd" is the stand-alone ldap server
  • "slurpd" is for replication
  • "ldapsearch" - "ldapadd" - "ldapmodify" ...
  • "ldif"
  • "ldbm"
  • "centipede"
  • KDE's ldap client is "kldap"
Xrpm: /etc/openldap. Info in /usr/share/doc/packages/openldap. Start via "/sbin/init.d/ldap start" or "start -v". "man ldap" comes with suggestions. Query via ldapsearch. Config in /etc/openldap. Major config file: "/etc/openldap/slapd.conf". You get some feedback in "/var/log/messages". Configuration:
  • schema used: via include in /etc/openldap/slapd.conf: slapd.oc.conf contains all oc (objectclasses)

  • tailoring of "/etc/openldap/slapd.conf" (e.g. loglevel -1 for maximum logging)
  • stop | start slapd by "/sbin/init.d/ldap stop | start"
  • creation of /Kassandra_Data/AdditionalCONFIG/imagine.ldif"
  • execution of "ldapadd -D "cn=Manager,dc=imagine,dc=com" -W -f /Kassandra_Data/AdditionalCONFIG/imagine.ldif" - you can see what's happening in /var/log/messages
  • FYI: dc stands for domainComponent
    1. msg: adding new entry dc=imagine, dc=com
    2. msg: adding new entry cn=Manager,dc=imagine,dc=com
  • check via "ldapsearch -b "dc=imagine,dc=com" "(objectclass=*)"
  • you can now also use kldap (careful: while specifying Root DN you need a space after the "," (so it is "cn=Manager, dc=imagine, dc=com")
DEMO
  • what? ===> xrpm ===> openldap

  • configure via /etc/openldap/slapd.conf
  • object classes are found in slapd.oc.conf
  • >stop | start slapd by "/sbin/init.d/ldap stop | start"
  • check via "ldapsearch -b "dc=imagine,dc=com" "(objectclass=*)" "
  • or via Netscape / addressbook
  • "ldapadd .... imagine3.ldif"
  • checkout via ldapsearch or Netscape Communicator

  • or via /Java51LDAP/src/netscape/ldap/tools/LDAPSearch2.java
  • or via kldap
  • log: /var/log/messages
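To make the ldapadd/ldapsearch steps above concrete, a hypothetical minimal LDIF in the dc=imagine,dc=com naming used here, written via a shell heredoc (the objectclasses are standard ones and must be present in the schema files included by slapd.conf):
cat > /tmp/imagine-min.ldif <<'EOF'
dn: dc=imagine,dc=com
objectclass: dcObject
objectclass: organization
o: imagine
dc: imagine

dn: cn=Manager,dc=imagine,dc=com
objectclass: organizationalRole
cn: Manager
EOF
ldapadd -D "cn=Manager,dc=imagine,dc=com" -W -f /tmp/imagine-min.ldif
ldapsearch -b "dc=imagine,dc=com" "(objectclass=*)"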

Applications - cryptography

Key formats PEM etc

In a nutshell

Four different ways to present certificates and their components:
  • PEM - Governed by RFCs, preferred by open-source software. It can have a variety of extensions (.pem, .key, .cer, .cert, more)
  • PKCS7 - Used by Java and supported by Windows. Does not contain private key material.
  • PKCS12 - A Microsoft private standard that was later defined in an RFC, providing enhanced security versus the plain-text PEM format. This can contain private key material. It's used preferentially by Windows systems, and can be freely converted to PEM format through use of openssl.
  • DER - The parent format of PEM. It's useful to think of it as a binary version of the base64-encoded PEM file. Not routinely used very much outside of Windows.

Some detail

Different ways to present certificates and their components:
  • .pem, from Privacy Enhanced Mail, defined in RFCs 1421 through 1424, container format that
    • may include just the public certificate (such as with Apache installs, and CA certificate files /etc/ssl/certs),
    • may include an entire certificate chain including public key, private key, and root certificates.
    Confusingly, it may also encode a CSR (e.g. as used here) as the PKCS10 format can be translated into PEM.
  • PEM is a failed method for secure email but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys.
  • .key - This is a PEM formatted file containing just the private key of a specific certificate and is merely a conventional name and not a standardized one. In Apache installs, this frequently resides in /etc/ssl/private. The rights on these files are very important, and some programs will refuse to load these certificates if they are set wrong.
  • .pkcs12 .pfx .p12 - Originally defined by RSA, the "12" variant was originally enhanced by Microsoft, and later submitted as RFC 7292. This is a passworded container format that contains both public and private certificate pairs. Unlike .pem files, this container is fully encrypted. Openssl can turn this into a .pem file with both public and private keys: openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes
  • .csr - Certificate Signing Request. The actual format is PKCS10, defined in RFC 2986. It includes some/all of the key details of the requested certificate such as subject, organization, state, whatnot, as well as the public key of the certificate to get signed. The returned certificate is the public certificate (which includes the public key but not the private key), which itself can be in a couple of formats.
  • .der - a way to encode ASN.1 syntax in binary, a .pem file is just a Base64 encoded .der file.
    • Both digital certificates and private keys can be encoded in DER format.
    • OpenSSL can convert these to .pem (openssl x509 -inform der -in to-convert.der -out converted.pem).
    • Windows sees these as Certificate files. By default, Windows will export certificates as .DER formatted files with a different extension, like one of the following formats:
  • .cert .cer .crt - a .pem (or rarely .der) formatted file with a different extension, one that is recognized by Windows Explorer as a certificate, which .pem is not.
  • .p7b .keystore - Defined in RFC 2315 as PKCS number 7, this is a format used by Windows for certificate interchange. Java understands these natively, and often uses .keystore as an extension instead. Unlike .pem style certificates, this format has a defined way to include certification-path certificates.
  • .crl - certificate revocation list.
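A few openssl conversion one-liners tying the above together (filenames are placeholders):
openssl x509 -inform der -in cert.der -out cert.pem                  # DER -> PEM
openssl x509 -outform der -in cert.pem -out cert.der                 # PEM -> DER
openssl pkcs12 -in bundle.p12 -out bundle.pem -nodes                 # PKCS12 -> PEM, private key left unencrypted
openssl pkcs12 -export -in cert.pem -inkey key.pem -out bundle.p12   # PEM cert + key -> PKCS12
openssl pkcs7 -print_certs -in certs.p7b -out certs.pem              # PKCS7 -> PEM certificate list
openssl req -new -key key.pem -out request.csr                       # generate a CSR from an existing key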

Standard Linux goodies

crypt

Crypt is DES-based, with a salt added and the encryption deliberately slowed down (iterated) to make brute-force attacks harder.

factor

This utility factors:

  • according to the man pages: numbers between [-2,147,483,648 .. 2,147,483,648].
  • according to putting it to the test: much higher numbers ... but how high?
According to man factor, you can also generate primes, but that does not seem to work. Way forward: use Cryptix Prime class etc.
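A quick test of both claims with GNU factor (4294967297 is 2^32+1, i.e. above the 32-bit range quoted in the man page):
factor 100
100: 2 2 5 5
factor 4294967297
4294967297: 641 6700417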

md5sum

Calculates the md5 hash over a file: >md5sum filename

sha256sum

Calculates the SHA-256 hash over a file: >sha256sum filename
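A typical use is verifying a download against a published hash (a sketch; the hash shown is a placeholder):
sha256sum google-chrome-stable_current_amd64.deb
echo "<published-hash>  google-chrome-stable_current_amd64.deb" | sha256sum -c -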

PGP - GPG

Overview

Configuration: see https://www.gnupg.org/documentation/manuals/gnupg/GPG-Configuration.html

See also local files:
  • /home/marc/.gnupg
  • Indicating:
    • gpg.conf and gpa.conf files
    • keyring (pubring.kbx), private keys, secret keys, CRL, ...
    • trustdb
File formats: https://github.com/chrisboyle/gnupg/blob/de12895acb6cbe9a07c2b680e3b0757b7ac7cb25/agent/keyformat.txt Uses https://en.wikipedia.org/wiki/Canonical_S-expressions because it relies on https://gnupg.org/documentation/manuals/gcrypt/index.html

Commands - CLI

  • GnuPG help: $ gpg -h
  • List Keys
    • List all keys:
    • $ gpg --list-keys
    • $ gpg -k
  • List private keys:
    • $ gpg --list-secret-keys
    • $ gpg -K
  • Delete Key
    • Delete public key: $ gpg --delete-key KEY_ID
    • Delete private key: $ gpg --delete-secret-key KEY_ID
  • Export Key
    • Export public key: $ gpg --export -a KEY_ID
    • Export private key: $ gpg --export-secret-key -a KEY_ID
  • Import Key
    • Import public key: $ gpg --import public.key
    • Import private key: $ gpg --allow-secret-key-import --import private.key

  • Sign: $ gpg -s -b -a --default-key 6CB58EB6976AA756A61196023A24F80BD0386B7F example.file.txt
  • Verify: $ gpg --verify example.file.txt.asc example.file.txt
  • gpg: Signature made Sat Nov 24 12:03:02 2018 CST
  • gpg: using RSA key 6CB58EB6976AA756A61196023A24F80BD0386B7F
  • gpg: Good signature from "HatterJ/L2 (Hatter Jiang's L2 PGP Key) " [ultimate]
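Complementing the list above with key generation and public-key encryption (a sketch; KEY_ID and the file name are placeholders):
gpg --gen-key                                      # interactive key generation
gpg --fingerprint KEY_ID                           # show the fingerprint
gpg -e -r KEY_ID -a example.file.txt               # encrypt to a recipient, ASCII-armoured -> example.file.txt.asc
gpg -d example.file.txt.asc > example.file.txt     # decrypt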

PGP - GPG on Windows

On Windows, install gpg4win from gpg4win.org. Info in https://files.gpg4win.org/doc/gpg4win-compendium-en.pdf .

Kassandra - Red Hat - PGP 5.0i

Installation on Kassandra. Downloaded in RPM format from "www.pgpi.com". Mind you, this is the "international" website and corresponding PGPi version. Strength?

Installed the rpm via glint, "utilities/text". Executables go in /usr/bin. Doc goes into /usr/doc. There's also a short man page for pgp, pgp.cfg (configuration), pgpk, etc...

Just running pgpk gives you an overview of key management. For details, refer to the user manual.

Running "man pgp.cfg" describes all the entries in the config file. An example would be nice. A quick peek in the rpm reveals that no default config file is provided. Where's this user manual? O'Reilly has an excellent book! Or there is a user manual in pdf on Win95.

Toothbrush - SuSE 5.3

Installation on Toothbrush. Run "rpm -ivv /Kassandra_Data/AdditionalRPM/pgp____.rpm". This installs binaries into /usr/bin (pgp and pgpk). Starting pgpk results in the message: cannot open configuration file "/root/.pgp/pgp.cfg". Indeed, there's no such file. O'Reilly nicely describes all the fields of this file on p. 271. I created a minimal "pgp.cfg" file. Question: Can I transfer my existing Win95 keyrings and continue to use them here?
Answer: Yes, I copied my pub/secrings into /Kassandra_Data/AdditionalCONFIG. From there, copy pgp.cfg, the pubring.pkr and secring.skr into "/root/.pgp". Now "pgpk -l" lists the content of my keyring, both public and private (PGP: secret) keys.

Question: How do I wipe without a GUI?
Answer: Use the -w flag, e.g. "pgpe -cw ...".

Boy - SuSE 6.0

Reinstalled as described for Toothbrush. Encrypting with a passphrase and "conventional cryptography (what's that, Phil? IDEA? Yes Marc, IDEA)": "pgpe -c foo" results in being asked for a passphrase to encrypt with IDEA. Use "pgpv foo" to decrypt. You'll be challenged for the passphrase.

Encrypting with a public key: "pgpe -r marc.sel@be.pwcglobal.com foo"
 

Decrypting again: "pgpv foo" and you'll be challenged for the passphrase.
 

IX.102.4 Malekh - SuSE 6.1

Reinstalled from /Kassandra_Data/AdditionalRPM/... Also copied pgp.cfg and keyrings from /Kassandra_Data/AdditionalCONFIG into /root/.pgp . Now wouldn't it be nice to have a GUI interface? Check out the 'Geheimniss' thing from SuSE.

IX.102.5 Avina - SuSE 6.4

SuSE 6.4 now comes with PGP 2.6.3i. According to the doc, it expects configuration information in your home directory, in ".pgp". Trying to use the old keyring from Malekh - no success. OK, Malekh used PGP 5.0i, from an additionally downloaded rpm. So I have to remove the "standard" PGP that came with SuSE, and reinstall PGP 5.0i from /Kassandra_Data/Additional_RPM. Hence I removed packages pgp and gpg via YaST. Then installed pgp 5.0i through YaST. This did not work (no pgp executable to be found) but did not return an error msg. Do a manual install "rpm -ivv etc...": failed dependencies:
  1. libm.so.5 is needed by pgp-5.0i-1
  2. libc.so.5 is needed by pgp-5.0i-1
Libraries are found in various places:
  • in /lib: here you find e.g. libc.so.6 and libm.so.6 (i.e. too high)
  • in /usr/i486-linux-lib..., but libc5/6 are empty
  • running "ldconfig -p" shows 552 libs found in cache /etc/ld.so.cache with all mappings
Hence need to install these two libs. Help from SuSE: install package shlibs5 from series a. This resolved the libc.so.5 dependence, rpm still complains about libm.so.5. However, 'ldconfig -p' shows there is '/usr/i486-linux-libc5/lib/libm.so.5', which is a symlink to '....5.0.9'. So what? Alternative: downloaded PGP 6.5.1i. Installed it in /pgp6 and below. Created /root/.pgp containing config & keyfiles. OK. Lots of documentation, as well as the sources, are available now...

imagine - GPG

GPG (Gnu Privacy Guard) is compliant with the OpenPGP implementation proposed in RFC 2440. It does not use any patented algorithms (IDEA, ex-RSA, ...). Symmetric algorithms are: 3DES, Blowfish, CAST5 and Twofish (GnuPG does not yet create Twofish encrypted messages because there is no agreement in the OpenPGP WG on how to use it together with a MDC algorithm)

Digest algorithms available are MD5, RIPEMD160 and SHA1. GPG 1.0 is included in SuSE 7.0 together with "GPGaddons". Documentation is in /usr/share/doc/packages/gpg. All files are mentioned at the end of "man gpg".

GPG. User-specific files: /root/.gnupg/... . Remember, previous versions of PGP keyring went in /Kassandra_Data/AdditionalConfig. Import via "gpg --import" (ref below). As a result, the trustdb is created, and those keys for which a "user id" is found (internal PGP/GPG user id I assume) are imported alright. Others are processed but not imported. Use:
  • gpg --version (indicates algorithms supported)
  • gpg --list-keys --with-colons
  • gpg --list-public-keys (initially empty)
  • gpg --list-secret-keys (idem)
  • gpg --import /root/.gnupg/pubring.pkr (imports all pubkeys)
  • gpg --import /root/.gnupg/secring.skr (imports secret keyring)
  • gpg --edit-key marc.sel@be.pwcglobal.com
    • symmetric encryption:
    • gpg -c --cipher-algo 3des --compress-algo 1 myfile (encrypts conventionally)
    • gpg -d --cipher-algo 3des /root/netscape.rpm.gpg > /root/netscape.tst (decrypts)
GPGaddons. Check /usr/share/doc/packages/gpgaddon.

Package and encrypt:
  • To pack: "tar cvfz /temp/ama.tgz foo/ama" (note that foo/fea refers to the entire directory)
  • To gpg: "gpg -c /temp/ama.tgz" (will go to ".gpg" file)
  • ---
  • To un-gpg "gpg -d /temp/ama.tgz.gpg > /tmp/ama.tgz"
  • To unpack: "tar xvfz /temp/ama.tgz" (note that this will recreate everything at the curent location)

GPG-GPA and Seahorse

GPA or Seahorse

GPA (GnuPrivacyAssistant) is a GPG GUI. Seahorse is similar.

GPA installation

marcsel@marcsel:~$ sudo apt-get install gpa
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  gpa
0 upgraded, 1 newly installed, 0 to remove and 15 not upgraded.
Need to get 203kB of archives.
After this operation, 840kB of additional disk space will be used.
Get:1 http://dell-mini.archive.canonical.com hardy/universe gpa 0.7.0-1.1ubuntu1 [203kB]
Fetched 203kB in 0s (328kB/s)
Selecting previously deselected package gpa.
(Reading database ... 100327 files and directories currently installed.)
Unpacking gpa (from .../gpa_0.7.0-1.1ubuntu1_lpia.deb) ...
Setting up gpa (0.7.0-1.1ubuntu1) ...

GPA usage

Start via the Gnome GUI, search for GPA. Rather user-friendly.

With Belgian eID, first thing is to access the card. Refer to Belpic installation instructions.

EDP - Enclave Development Platform

Quote: The Fortanix Rust EDP is the preferred way to write Intel SGX enclaves from scratch. Unquote.

Basics

EDP's doc states: SGX requires support from the operating system to load enclaves. For this, you need to install and load the SGX driver.

Various installation methods are provided in the Installation guide. If the driver is already installed, make sure it's loaded properly, and you have the appropriate permissions. On Linux, additional debugging information may be available with dmesg or journalctl.

Do 'sudo dmesg -k' to see your kernel messages.

Do 'lsmod' to see what modules/device drivers are loaded. Intel's website lists drivers which have names such as 'sgx_linux_x64_driver_1.41.bin'. No such driver found in outcome of lsmod.

Installation

See https://edp.fortanix.com/docs/installation/guide/

Step 1 rustup nightly

You need rustup with nightly toolchain, install it as: 'rustup default nightly'. Returns:
info: syncing channel updates for 'nightly-x86_64-unknown-linux-gnu'
info: latest update on 2023-10-13, rust version 1.75.0-nightly (e20cb7702 2023-10-12)
info: downloading component 'cargo'
  7.7 MiB /   7.7 MiB (100 %)   2.8 MiB/s in  2s ETA:  0s
info: downloading component 'clippy'
info: downloading component 'rust-docs'
 14.4 MiB /  14.4 MiB (100 %)   3.1 MiB/s in  4s ETA:  0s
info: downloading component 'rust-std'
 26.5 MiB /  26.5 MiB (100 %)   2.6 MiB/s in 10s ETA:  0s
info: downloading component 'rustc'
 55.6 MiB /  55.6 MiB (100 %)   2.6 MiB/s in 20s ETA:  0s
info: downloading component 'rustfmt'
info: installing component 'cargo'
info: installing component 'clippy'
info: installing component 'rust-docs'
 14.4 MiB /  14.4 MiB (100 %)   6.1 MiB/s in  1s ETA:  0s
info: installing component 'rust-std'
 26.5 MiB /  26.5 MiB (100 %)  17.2 MiB/s in  1s ETA:  0s
info: installing component 'rustc'
 55.6 MiB /  55.6 MiB (100 %)  19.4 MiB/s in  2s ETA:  0s
info: installing component 'rustfmt'
info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu installed - rustc 1.75.0-nightly (e20cb7702 2023-10-12)

Step 2 EDP

Refer to https://edp.fortanix.com/docs/installation/guide/ to install the Fortanix EDP target: 'rustup target add x86_64-fortanix-unknown-sgx --toolchain nightly'. This returns:
info: downloading component 'rust-std' for 'x86_64-fortanix-unknown-sgx'
info: installing component 'rust-std' for 'x86_64-fortanix-unknown-sgx'
 20.2 MiB /  20.2 MiB (100 %)  16.7 MiB/s in  1s ETA:  0s

Step 3 Install SGX driver

Two options: Ubuntu or Linux Driver Package.
Option 1 Ubuntu approach
Enable the Fortanix APT repository and install the intel-sgx-dkms package.
echo "deb https://download.fortanix.com/linux/apt xenial main" | sudo tee -a /etc/apt/sources.list.d/fortanix.list >/dev/null
curl -sSL "https://download.fortanix.com/linux/apt/fortanix.gpg" | sudo -E apt-key add -
sudo apt-get update
sudo apt-get install intel-sgx-dkms
Interim conclusion: installs 'intel-sgx-dkms' package.
Option 2 Linux Driver package
On Intel's website, find the latest “Intel SGX Linux Release” (not “Intel SGX DCAP Linux Release”) and download the “Intel(R) SGX Installers” for your platform. The package will have 'driver' in the name. The guide points to https://01.org/intel-software-guard-extensions/downloads, but this redirects to https://www.intel.com/content/www/us/en/developer/topic-technology/open/overview.html .
Interim conclusion: where to find this? https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/linux-overview.html?wapkw=intel%20linux gives info: the mainline Linux kernel has had built-in Intel SGX support since release 5.11. The in-kernel Intel SGX driver requires the platform to support, and to be configured for, flexible launch control (FLC). Use the mainline kernel with Intel SGX support whenever possible.
Use 'lsmod | grep sgx' to see if any sgx modules are loaded. Returns: nothing.
  • GrayTiger runs Debian 11 which is '5.18.0-0.bpo.1-amd64'
  • BlackTiger runs Debian 10 which is '5.10.0-0.bpo.5-amd64'
So what is flexible launch control (FLC)? Says https://edp.fortanix.com/docs/installation/help/ :
  • Most Intel CPUs produced after 2018 that have SGX support also have FLC support.
  • To be able to use FLC, the BIOS must enable this functionality on boot. SGX works without FLC, but you won't be able to run production-mode enclaves unless they are signed by an Intel-blessed signing key.
  • To enable FLC, you will need to re-configure your BIOS manually. This of course requires that the BIOS supports SGX. Your BIOS may also call this feature “Unlocked” launch control.
You find drivers at https://download.01.org/intel-sgx/latest/linux-latest/distro/

Step 4 Install Architectural Enclave Service Manager (AESM) service

This provides a protocol to access Intel's architectural enclaves. These enclaves are necessary to launch enclaves (without hardware support for flexible launch control), and to perform EPID remote attestation.

If your platform supports FLC, then you only need to install AESM if you want to use EPID remote attestation.

Various installation methods are provided in Installation guide. If AESM is already installed, make sure it's running properly. AESM requires Internet access to work properly, a proxy may be configured in /etc/aesmd.conf (Linux) or with AESMProxyConfigure.exe (Windows).

Download and run the aesmd image from Docker Hub: 'docker run --detach --restart always --device /dev/isgx --volume /var/run/aesmd:/var/run/aesmd --name aesmd fortanix/aesmd'. Results in:
docker: unknown server OS: .
See 'docker run --help'.
Try with sudo:
sudo docker run --detach --restart always --device /dev/isgx --volume /var/run/aesmd:/var/run/aesmd --name aesmd fortanix/aesmd
Unable to find image 'fortanix/aesmd:latest' locally
latest: Pulling from fortanix/aesmd
40908747a28d: Pull complete 
Digest: sha256:53c3dee56f32922ce807aedc8fe8653ca6c03d8c16991779f2e903d8b408d7a1
Status: Downloaded newer image for fortanix/aesmd:latest
a73e63bdd4ca8685bbbec8a559d305f4c69f93df961fe97267de174230821d27
docker: Error response from daemon: error gathering device information while adding custom device "/dev/isgx": no such file or directory.
Interim conclusion: the AESM service is installed but can't find the SGX device...

Step 5 Install Fortanix EDP utilities

You will need to install the OpenSSL development package and the Protobuf compiler. For example, on Debian/Ubuntu: 'sudo apt-get install pkg-config libssl-dev protobuf-compiler', returns:
sudo apt-get install pkg-config libssl-dev protobuf-compiler
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
pkg-config is already the newest version (0.29.2-1).
pkg-config set to manually installed.
The following additional packages will be installed:
  libprotobuf-dev libprotobuf-lite23 libprotobuf23 libprotoc23 libssl1.1
Suggested packages:
  libssl-doc protobuf-mode-el
The following NEW packages will be installed:
  libprotobuf-dev libprotoc23 libssl-dev protobuf-compiler
The following packages will be upgraded:
  libprotobuf-lite23 libprotobuf23 libssl1.1
3 upgraded, 4 newly installed, 0 to remove and 322 not upgraded.
Need to get 6,632 kB of archives.
After this operation, 23.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://deb.debian.org/debian bullseye/main amd64 libssl1.1 amd64 1.1.1w-0+deb11u1 [1,566 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 libprotobuf23 amd64 3.12.4-1+deb11u1 [891 kB]
Get:3 http://deb.debian.org/debian bullseye/main amd64 libprotobuf-lite23 amd64 3.12.4-1+deb11u1 [242 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 libprotobuf-dev amd64 3.12.4-1+deb11u1 [1,234 kB]
Get:5 http://deb.debian.org/debian bullseye/main amd64 libprotoc23 amd64 3.12.4-1+deb11u1 [802 kB]
Get:6 http://deb.debian.org/debian bullseye/main amd64 libssl-dev amd64 1.1.1w-0+deb11u1 [1,820 kB]
Get:7 http://deb.debian.org/debian bullseye/main amd64 protobuf-compiler amd64 3.12.4-1+deb11u1 [75.3 kB]
Fetched 6,632 kB in 3s (2,509 kB/s)         
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 628622 files and directories currently installed.)
Preparing to unpack .../libssl1.1_1.1.1w-0+deb11u1_amd64.deb ...
Unpacking libssl1.1:amd64 (1.1.1w-0+deb11u1) over (1.1.1n-0+deb11u3) ...
Setting up libssl1.1:amd64 (1.1.1w-0+deb11u1) ...
(Reading database ... 628622 files and directories currently installed.)
Preparing to unpack .../0-libprotobuf23_3.12.4-1+deb11u1_amd64.deb ...
Unpacking libprotobuf23:amd64 (3.12.4-1+deb11u1) over (3.12.4-1) ...
Preparing to unpack .../1-libprotobuf-lite23_3.12.4-1+deb11u1_amd64.deb ...
Unpacking libprotobuf-lite23:amd64 (3.12.4-1+deb11u1) over (3.12.4-1) ...
Selecting previously unselected package libprotobuf-dev:amd64.
Preparing to unpack .../2-libprotobuf-dev_3.12.4-1+deb11u1_amd64.deb ...
Unpacking libprotobuf-dev:amd64 (3.12.4-1+deb11u1) ...
Selecting previously unselected package libprotoc23:amd64.
Preparing to unpack .../3-libprotoc23_3.12.4-1+deb11u1_amd64.deb ...
Unpacking libprotoc23:amd64 (3.12.4-1+deb11u1) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../4-libssl-dev_1.1.1w-0+deb11u1_amd64.deb ...
Unpacking libssl-dev:amd64 (1.1.1w-0+deb11u1) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../5-protobuf-compiler_3.12.4-1+deb11u1_amd64.deb ...
Unpacking protobuf-compiler (3.12.4-1+deb11u1) ...
Setting up libprotobuf23:amd64 (3.12.4-1+deb11u1) ...
Setting up libprotobuf-lite23:amd64 (3.12.4-1+deb11u1) ...
Setting up libprotoc23:amd64 (3.12.4-1+deb11u1) ...
Setting up libssl-dev:amd64 (1.1.1w-0+deb11u1) ...
Setting up protobuf-compiler (3.12.4-1+deb11u1) ...
Setting up libprotobuf-dev:amd64 (3.12.4-1+deb11u1) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for libc-bin (2.31-13+deb11u3) ...
Seems OK.

Then, you can use cargo to install the utilities from source: 'cargo install fortanix-sgx-tools sgxs-tools', returns: a really long list, ending with:
Installing /home/marc/.cargo/bin/sgx-detect
  Installing /home/marc/.cargo/bin/sgxs-append
  Installing /home/marc/.cargo/bin/sgxs-build
  Installing /home/marc/.cargo/bin/sgxs-info
  Installing /home/marc/.cargo/bin/sgxs-load
  Installing /home/marc/.cargo/bin/sgxs-sign
   Installed package `sgxs-tools v0.8.6` (executables `sgx-detect`, `sgxs-append`, `sgxs-build`, `sgxs-info`, `sgxs-load`, `sgxs-sign`)
     Summary Successfully installed fortanix-sgx-tools, sgxs-tools!
You can see in /home/marc/.cargo/bin that there are these sgx-* binaries indeed.

Step 6 Configure Cargo integration with Fortanix EDP

Then configure Cargo integration with Fortanix EDP. Configure the Cargo runner for the x86_64-fortanix-unknown-sgx target, so that Cargo knows how to run enclaves after building.

Create a .cargo directory in your $HOME directory, with a config file in it that has the following content:
[target.x86_64-fortanix-unknown-sgx]
runner = "ftxsgx-runner-cargo"

If you already have a .cargo/config file in your $HOME, just append the above content to it. Check: I do not have this file. So I created it, with these contents.
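As a smoke test of the toolchain and runner configuration (a sketch based on the Fortanix EDP docs; the crate name is arbitrary):
cargo new hello-enclave
cd hello-enclave
cargo run --target x86_64-fortanix-unknown-sgx     # builds for the SGX target and runs it via the ftxsgx-runner-cargo runner configured above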

Check SGX setup

Before you start building your application, you must verify that SGX is enabled and all software dependencies are in place. The sgx-detect utility does this for you. Run it by: 'sgx-detect'

If sgx-detect gives positive output, you are good to go. Else, you need to troubleshoot the setup by following the Help guide.
 sgx-detect
Detecting SGX, this may take a minute...
SGX instruction set
  CPU support
SGX system software
  SGX kernel device
  libsgx_enclave_common
  AESM service
🕮  SGX instruction set > CPU support
It appears your hardware does not have SGX support.
(run with `--verbose` for more details)
More information: https://edp.fortanix.com/docs/installation/help/#cpu-support
🕮  SGX system software > SGX kernel device
The SGX device (/dev/sgx/enclave, /dev/sgx or /dev/isgx) is not present.
It could not be detected whether you have an SGX driver installed. Please make sure the SGX driver is installed and loaded correctly.
(run with `--verbose` for more details)
More information: https://edp.fortanix.com/docs/installation/help/#sgx-driver
🕮  SGX system software > AESM service
AESM could not be contacted. AESM is needed for launching enclaves and generating attestations.
Please check your AESM installation.
(run with `--verbose` for more details)
More information: https://edp.fortanix.com/docs/installation/help/#aesm-service

Building and running

Dropped since apparently EDP's support for emulation/simulation does not exist in reality.

OP-TEE - Open Source Portable TEE

STMicroelectronics => Linaro => TrustedFirmware.org project.

Basics

OP-TEE architecture

See description at https://optee.readthedocs.io/en/latest/architecture/trusted_applications.html# .

There are two ways to implement Trusted Applications (TAs)
  • User mode TAs are full featured as specified by the GlobalPlatform API TEE specifications, these are the ones people refer to when they say “Trusted Applications” and in most cases this is the preferred type of TA to write and use.
  • Pseudo Trusted Applications, which are not TAs. A Pseudo TA is an interface, exposed by the OP-TEE Core to its outer world: to secure client Trusted Applications and to non-secure client entities.

User mode TAs are loaded (mapped into memory) by OP-TEE core in the Secure World when something in Rich Execution Environment (REE) wants to talk to that particular application UUID. They run at a lower CPU privilege level than OP-TEE core code. In that respect, they are quite similar to regular applications running in the REE, except that they execute in Secure World.

TAs benefit from the GP TEE Internal Core API specifications. User mode TAs differ by the way they are stored.
  • Plain TAs (user mode) can reside and be loaded from various places. There are three ways currently supported in OP-TEE.
    1. Early TAs are virtually identical to the REE FS TAs, but instead of being loaded from the Normal World file system, they are linked into a special data section in the TEE core blob. Therefore, they are available even before tee-supplicant and the REE’s filesystems have come up. See https://optee.readthedocs.io/en/latest/architecture/trusted_applications.html#early-ta
    2. REE filesystem TAs, which consist of a ELF file, signed and optionally encrypted, named from the UUID of the TA and the suffix .ta. They are built separately from the OP-TEE core boot-time blob, although when they are built they use the same build system, and are signed with the key from the build of the original OP-TEE core blob. Because the TAs are signed and optionally encrypted with scripts/sign_encrypt.py, they are able to be stored in the untrusted REE filesystem, and tee-supplicant will take care of passing them to be checked and loaded by the Secure World OP-TEE core.
    3. Secure Storage TAs which are stored in secure storage. The meta data is stored in a database of all installed TAs and the actual binary is stored encrypted and integrity protected as a separate file in the untrusted REE filesystem (flash). Before these TAs can be loaded they have to be installed first, this is something that can be done during initial deployment or at a later stage. See https://optee.readthedocs.io/en/latest/architecture/trusted_applications.html#secure-storage-ta
A description of loading and preparing a TA for execution is given at https://optee.readthedocs.io/en/latest/architecture/trusted_applications.html#loading-and-preparing-ta-for-execution .

Easiest way to run OP-TEE (in theory)

See https://optee.readthedocs.io/en/latest/faq/faq.html#faq-try-optee, which states that the easiest way is running OP-TEE on QEMU on a local PC.

To do that you would need to:
  • Install the OP-TEE Prerequisites.
  • Build OP-TEE for QEMU according to the instructions at QEMU v7/v8.
  • Run xtest.
  • Summarizing the above, you would need to:
    • $ sudo apt-get install [pre-reqs]
    • $ mkdir optee-qemu && cd optee-qemu
    • $ repo init -u https://github.com/OP-TEE/manifest.git
    • $ repo sync
    • $ cd build
    • $ make toolchains -j2
    • $ make run
    • QEMU console: (qemu) c
    • Normal world shell: # xtest

Installing OP-TEE with QEMU - round 1

Main building blocks:
  • OP-TEE, which has the Teaclave TrustZone SDK integrated since OP-TEE Release 3.15.0 (18/Oct/21)
  • QEMU, because the aarch64 Rust examples are built and installed into OP-TEE's default filesystem for QEMUv8
Approach:
  • Do the Quick start with the OP-TEE Repo for QEMUv8, see https://github.com/apache/incubator-teaclave-trustzone-sdk#quick-start-with-the-op-tee-repo-for-qemuv8
    • Build and install OP-TEE with the Teaclave TrustZone SDK, see:
      • https://github.com/apache/incubator-teaclave-trustzone-sdk#build--install, which refers to
      • https://optee.readthedocs.io/en/latest/building/optee_with_rust.html
        • clone the OP-TEE repo first, for QEMUv8, run:
          • mkdir YOUR_OPTEE_DIR && cd YOUR_OPTEE_DIR // OK, /home/marc/OPTEE
          • repo init -u https://github.com/OP-TEE/manifest.git -m qemu_v8.xml // 'repo' commands fail
          • Solution to repo failure: see LTK info
          • python3 /usr/bin/repo init -u https://github.com/OP-TEE/manifest.git -m qemu_v8.xml' - OK 'repo has been initialised in /home/marc/OPTEE'
            • The repo command points to a manifest file https://github.com/OP-TEE/manifest/blob/master/qemu_v8.xml - this is a list of project paths and parameters, details at https://gerrit.googlesource.com/git-repo/+/main/docs/manifest-format.md
          • Then sync it: 'python3 /usr/bin/repo sync' // repo sync is like git clone - OK results in subdirs under /home/marc/OPTEE:
            • build // 'This git contains makefiles etc to build a full OP-TEE developers setup.' Contains .mk files, including qemu_v8.mk
            • buildroot // The 'buildroot' tool for generating a cross-compilation toolchain, there's a /docs/manual folder
            • hafnium // a hypervisor initially supporting aarch64 cpus
            • linux // kernel plus guides for kernel developers (building/running the kernel) - can also be found at https://www.kernel.org/doc/html/latest (so why replicate it here?)
            • mbedtls // Mbed TLS is a C library that implements crypto primitives, X.509 cert manipulation and TLS/ DTLS protocols - documentation in /docs
            • optee_benchmark
            • optee_client
            • optee_examples
            • optee_os // contains the TA-devkit, required for TA makefile
              • Sidebar: description of optee_os and its makefiles is in https://optee.readthedocs.io/en/latest/building/gits/optee_os.html
              • /lib/libutee // a.o. tee_api.c
              • /scripts // .sh and .py scripts
            • optee_rust //
              • source code of Teaclave TrustZone SDK
              • sample Rust programs using the Teaclave TrustZone SDK
            • optee_test
            • qemu
            • trusted-firmware-a
            • u-boot
            • .repo // a local copy of the Internet OPTEE repo where its sources reside, described in ./repo/repo/docs/internal-fs-layout.md
Next steps, generic:
  1. cd build // get into the build subdir
  2. make toolchains -j2 // make 'toolchains' target
  3. make run
On BlackTiger: first 'cd OPTEE', then as described above:
  1. cd build // contains one Makefile and many .mk files, including qemu_v8.mk where Makefile points at
    • See local copy at /home/marc/Documents/Beta/101-SME/125 TEE/OPTEE/Makefile
    • Variables specify compilation of Secure/NonSecure User/Kernel code
    • Includes other makefiles, defines options, variables for TARGET_DEPS, TARGET_CLEAN, ..., targets (clean, all)
    • Then section # ARM Trusted Firmware
    • Section # QEMU with qemu targets
    • Section # U-Boot
    • Section # Linux kernel
    • Section # OP-TEE
    • Section # Hafnium // https://www.trustedfirmware.org/projects/hafnium/ A reference Secure Partition Manager (SPM) for systems that implement the Armv8.4-A Secure-EL2 extension.
    • Section # mkimage - create images to be loaded by U-Boot
    • Section # XEN
    • Section # Run targets - run: all / run-only / check / check-only / check-rust / ...
  2. make toolchains -j2 // make 'toolchains' target, note that there is no target 'toolchains' in Makefile, but Makefile includes toolchain.mk which has a target 'toolchains'
    • Downloading arm-gnu-toolchain-11.3.rel1-x86... // OK
  3. make run // run is the first target in Section # Run targets
    • Results in
      • make -C /home/marc/OPTEE/build/../optee_os O=out/arm ... followed by lots of variables
      • entering directory /home/marc/OPTEE/optee_os
      • CHK out/arm/core/... include directory, core directory, tee directory
      • Error:
        make run 
        make -C /home/marc/OPTEE/build/../optee_os O=out/arm CFG_USER_TA_TARGETS=ta_arm64 CFG_ARM64_core=y PLATFORM=vexpress-qemu_armv8a CROSS_COMPILE=" /home/marc/OPTEE/build/../toolchains/aarch64/bin/aarch64-linux-gnu-" CROSS_COMPILE_core=" /home/marc/OPTEE/build/../toolchains/aarch64/bin/aarch64-linux-gnu-" CROSS_COMPILE_ta_arm64=" /home/marc/OPTEE/build/../toolchains/aarch64/bin/aarch64-linux-gnu-" CROSS_COMPILE_ta_arm32=" /home/marc/OPTEE/build/../toolchains/aarch32/bin/arm-linux-gnueabihf-" CFG_TEE_CORE_LOG_LEVEL=3 DEBUG=0 CFG_TEE_BENCHMARK=n CFG_IN_TREE_EARLY_TAS="trusted_keys/f04a0fe7-1f5d-4b9b-abf7-619b85b4ce8c" DEBUG=0 CFG_ARM_GICV3=y 
        make[1]: Entering directory '/home/marc/OPTEE/optee_os'
          CHK     out/arm/conf.mk
          CHK     out/arm/include/generated/conf.h
          CC      out/arm/core/include/generated/.asm-defines.s
          CHK     out/arm/core/include/generated/asm-defines.h
          UPD     out/arm/core/include/generated/asm-defines.h
          CC      out/arm/core/arch/arm/kernel/rpc_io_i2c.o
          CC      out/arm/core/arch/arm/kernel/idle.o
          CC      out/arm/core/arch/arm/kernel/tee_time_arm_cntpct.o
          CC      out/arm/core/arch/arm/kernel/timer_a64.o
          AS      out/arm/core/arch/arm/kernel/spin_lock_a64.o
          AS      out/arm/core/arch/arm/kernel/tlb_helpers_a64.o
          AS      out/arm/core/arch/arm/kernel/cache_helpers_a64.o
          AS      out/arm/core/arch/arm/kernel/thread_a64.o
          CC      out/arm/core/arch/arm/kernel/thread.o
          CC      out/arm/core/arch/arm/kernel/arch_scall.o
        ... long list ...
          CC      out/arm/core/tee/tee_cryp_utl.o
          CC      out/arm/core/tee/tee_cryp_hkdf.o
          CC      out/arm/core/tee/tee_cryp_concat_kdf.o
          CC      out/arm/core/tee/tee_cryp_pbkdf2.o
          CC      out/arm/core/tee/tee_obj.o
          CC      out/arm/core/tee/tee_svc.o
          CC      out/arm/core/tee/tee_svc_cryp.o
          CC      out/arm/core/tee/tee_svc_storage.o
          CC      out/arm/core/tee/tee_time_generic.o
          CC      out/arm/core/tee/tadb.o
          CC      out/arm/core/tee/socket.o
          CC      out/arm/core/tee/tee_ta_enc_manager.o
          CC      out/arm/core/tee/tee_fs_key_manager.o
          CC      out/arm/core/tee/tee_ree_fs.o
          CC      out/arm/core/tee/fs_dirfile.o
          CC      out/arm/core/tee/fs_htree.o
          CC      out/arm/core/tee/tee_fs_rpc.o
          CC      out/arm/core/tee/tee_pobj.o
          CC      out/arm/core/tee/uuid.o
          CC      out/arm/core/tee/tee_supp_plugin_rpc.o
          CC      out/arm/core/tests/ftmn_boot_tests.o
          GEN     out/arm/core/ta_pub_key.c
        Traceback (most recent call last):
          File "scripts/pem_to_pub_c.py", line 71, in <module>
            main()
          File "scripts/pem_to_pub_c.py", line 24, in main
            from cryptography.hazmat.backends import default_backend
        ModuleNotFoundError: No module named 'cryptography'
        make[1]: *** [mk/subdir.mk:185: out/arm/core/ta_pub_key.c] Error 1
        make[1]: Leaving directory '/home/marc/OPTEE/optee_os'
        make: *** [common.mk:550: optee-os-common] Error 2
        
      • Error stating 'pem_to_pub_c.py' (which is a python 3 script found in /OPTEE/optee_os/scripts) imports 'default_backend' from 'cryptography.hazmat.backends', which is part of the pypi cryptography package
        • Sidebar 1: PYPI, https://pypi.org/, the Python Package Index (PyPI) is a repository of software for the Python programming language.
        • There is PYPI cryptography https://pypi.org/project/cryptography/ which is a package which provides cryptographic recipes and primitives to Python developers. Our goal is for it to be your “cryptographic standard library”. It supports Python 3.7+ and PyPy3 7.3.10+. 'cryptography' includes both high level recipes and low level interfaces to common cryptographic algorithms such as symmetric ciphers, message digests, and key derivation functions.
        • Sidebar 2 PIP https://en.wikipedia.org/wiki/Pip_(package_manager) which may be used to install PYPI cryptography: pip install cryptography. However, BlackTiger: command pip not found. Synaptics shows there is a package python3-cryptography ... install it...
      • Rerun 'make run' - error can't find elftools module - 'please install with sudo apt-get install python3-pyelftools' => Installed it via Synaptics
      • Rerun 'make run' - lot of things get installed in out/arm/... see /home/marc/OPTEE/optee_os/out/arm/core
      • Rerun 'make run' - error: bash: dtc: command not found - seems to be 'device tree compiler' see https://en.wikipedia.org/wiki/Devicetree, Synaptics lets you install device-tree-compiler package
      • Rerun 'make run' - error 'bison not found' - install with Synaptics
      • Rerun 'make run' - error 'flex' not found - Synaptics install
      • Rerun 'make run' - error include/image.h:1395:12: fatal error: openssl/evp.h: No such file or directory - Synaptics install of libssl-dev package
      • Hindsight: see https://optee.readthedocs.io/en/latest/building/prerequisites.html for prerequisites...
      • Rerun 'make run' LONG EXECUTION seems to include use of python scripts to install rust crates, output in
        • /home/marc/OPTEE/out and out-br
        • /home/marc/OPTEE/optee_os/out/arm
      • But the long-execution rerun of 'make run' - error Ninja not found - Synaptic install of ninja-build
      • But the long-execution rerun of 'make run' - error pkg-config not found - Synaptic install of pkg-config
      • But the long-execution rerun of 'make run' - error glib-2.56 not found -
        • https://stackoverflow.com/questions/72650263/glib-2-56-is-required-to-compile-qemu - describes the same error
        • Synaptic has package libglib2.0-dev which contains glib-2.58, so slightly newer - nevertheless this is what Stack Overflow suggests to install ...
      • Rerun 'make run' - error gthread-2.0 not found - Synaptic install of gthread-2.0? installed libsanitizer ... problem gone
      • Rerun 'make run' - error with make when building qemu XXX apparently I stopped here and cd'd into /qemu, made a successful qemu build, but I did not return to 'make run' in /build (not /qemu/build)
        • entering /home/marc/OPTEE/qemu/build
        • config-host.mak is out of date, running configure
        • line 3 ./config.status no such file or directory
        • no rule to make target 'config-host.mak', needed by 'Makefile.prereqs', stop. Try 'make toolchains -j2' again, same error
        • Longshot: https://wiki.qemu.org/Hosts/Linux describes how to make qemu on Debian and specifies additional packages; installed those - same error
        • Longshot: https://wiki.qemu.org/Hosts/Linux describes how to build and test ... using 'configure' - see local ./configure info https://thoughtbot.com/blog/the-magic-behind-configure-make-make-install
        • Attempt: cd /OPTEE/qemu then ./configure then cd /OPTEE/build and make run => leads to successful compilation apparently
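      Hindsight consolidated: the packages installed one by one above can presumably be pulled in with a single apt call before the first 'make run'; a minimal sketch using only the package names noted above (not claimed to be the complete official prerequisite list):
        # sketch: distro packages that the 'make run' iterations above ended up needing
        sudo apt-get install python3-cryptography python3-pyelftools \
            device-tree-compiler bison flex libssl-dev \
            ninja-build pkg-config libglib2.0-dev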
End of qemu building
make[2]: Leaving directory '/home/marc/OPTEE/qemu/build'
make[1]: Leaving directory '/home/marc/OPTEE/qemu'
/home/marc/OPTEE/build/../toolchains/aarch64/bin/aarch64-linux-gnu-objcopy -O binary \
				-R .note \
				-R .comment \
				-S /home/marc/OPTEE/build/../linux/vmlinux \
				/home/marc/OPTEE/build/../out/bin/linux.bin
/home/marc/OPTEE/build/../u-boot/tools/mkimage -A arm64 \
			-O linux \
			-T kernel \
			-C none \
			-a 0x42200000 \
			-e 0x42200000 \
			-n "Linux kernel" \
			-d /home/marc/OPTEE/build/../out/bin/linux.bin /home/marc/OPTEE/build/../out/bin/uImage
Image Name:   Linux kernel
Created:      Tue Oct 31 14:44:00 2023
Image Type:   AArch64 Linux Kernel Image (uncompressed)
Data Size:    38783488 Bytes = 37874.50 KiB = 36.99 MiB
Load Address: 42200000
Entry Point:  42200000
ln -sf /home/marc/OPTEE/build/../out-br/images/rootfs.cpio.gz /home/marc/OPTEE/build/../out/bin
/home/marc/OPTEE/build/../u-boot/tools/mkimage -A arm64 \
			-T ramdisk \
			-C gzip \
			-a 0x45000000 \
			-e 0x45000000 \
			-n "Root file system" \
			-d /home/marc/OPTEE/build/../out/bin/rootfs.cpio.gz /home/marc/OPTEE/build/../out/bin/rootfs.cpio.uboot
Image Name:   Root file system
Created:      Tue Oct 31 14:44:00 2023
Image Type:   AArch64 Linux RAMDisk Image (gzip compressed)
Data Size:    8831884 Bytes = 8624.89 KiB = 8.42 MiB
Load Address: 45000000
Entry Point:  45000000
make run-only
make[1]: Entering directory '/home/marc/OPTEE/build'
ln -sf /home/marc/OPTEE/build/../out-br/images/rootfs.cpio.gz /home/marc/OPTEE/build/../out/bin/

* QEMU is now waiting to start the execution
* Start execution with either a 'c' followed by <enter> in the QEMU console or
* attach a debugger and continue from there.
*
* To run OP-TEE tests, use the xtest command in the 'Normal World' terminal
* Enter 'xtest -h' for help.

# Option “-x” is deprecated and might be removed in a later version of gnome-terminal.
# Option “-x” is deprecated and might be removed in a later version of gnome-terminal.
# Use “-- ” to terminate the options and put the command line to execute after it.
# Use “-- ” to terminate the options and put the command line to execute after it.
# watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
# watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
# unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
# unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
# watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
# watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
cd /home/marc/OPTEE/build/../out/bin && /home/marc/OPTEE/build/../qemu/build/aarch64-softmmu/qemu-system-aarch64 \
	-nographic \
	-serial tcp:localhost:54320 -serial tcp:localhost:54321 \
	-smp 2 \
	-s -S -machine virt,acpi=off,secure=on,mte=off,gic-version=3,virtualization=false \
	-cpu max,sme=on,pauth-impdef=on \
	-d unimp -semihosting-config enable=on,target=native \
	-m 1057 \
	-bios bl1.bin		\
	-initrd rootfs.cpio.gz \
	-kernel Image \
	-append 'console=ttyAMA0,38400 keep_bootcon root=/dev/vda2 ' \
	 \
	-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000 -netdev user,id=vmnic -device virtio-net-device,netdev=vmnic
qemu-system-aarch64: -netdev user,id=vmnic: network backend 'user' is not compiled into this binary
make[1]: *** [Makefile:480: run-only] Error 1
make[1]: Leaving directory '/home/marc/OPTEE/build'
make: *** [Makefile:438: run] Error 2

Installing OP-TEE with QEMU round 1 - outcome - xyz

After installation following subdirs are found under /home/marc/OPTEE:
  • build // to build qemu and the OPTEE development system
    • 'This git contains makefiles etc to build a full OP-TEE developers setup.' Contains .mk files, including qemu_v8.mk
    • Contains qemu_v8.mk, with target 'run-only', where you see a.o.
      • the use of /out-br/images/rootfs.cpio.gz as BINARIES_PATH
      • invocation of commands at Qemu start-up, check-terminal, run-help, launch-terminal Normal/Secure world, start up Qemu with parameters (serial ports, -initrd rootfs.cpio.gz, -kernel Image, ...)
  • buildroot // to create cross-compilation toolchains, there's a /docs/manual folder
    • to use it manually, 'make menuconfig', select the target architecture and the packages you wish to compile, run 'make', find the kernel, bootloader, root filesystem, etc. in output/images
    • comes with a basic configuration for many boards, run 'make list-defconfigs' to view them

  • out // the outcome of /OPTEE/build
    • binaries for bootloaders bl1.bin .. bl33.bin, // the boot loader is U-boot
    • linux.bin, Image, rootfs.cpio, uImage, // images are created with mkimage
    • execution logfiles serial0.log and serial1.log
  • out-br // the outcome of /OPTEE/buildroot, recall that the rootfs of Linux contains directories such as /bin (cmds), /boot (kernel images, bootloader, initial ram disk, configs), /dev, /etc, /lib, /mnt, /run, ...
    • Makefile
    • /build
    • /host // looks like Linux host on aarch64, with /bin, /etc/, /lib, ...
    • /images // contains root file system in 3 formats
      • rootfs.cpio // rootfs compressed in cpio format
      • rootfs.cpio.gz
      • rootfs.cpio.tar
    • /per-package
    • /staging
    • /target

  • optee_os // OP-TEE itself (tee.bin), supposed to contain the client library libteec.so (however this is also e.g. in /out-br), a driver optee.ko, ...
    • Sidebar: description of optee_os and its makefiles is in https://optee.readthedocs.io/en/latest/building/gits/optee_os.html
    • Sidebar: contains the TA-devkit (which is exactly what? see the TA section below), required for the TA Makefile
    • /core including /arch, crypto, drivers, include, kernels, tee, tests, core.mk, ...
    • /keys including default.pem and default.ta.pem
    • /ldelf including .c .h and .S files
    • /lib including /libutee // a.o. tee_api.c
    • /mk including .mk files
    • /out including /arm/core which includes tee.bin, the TEE itself - further includes 3 boot image files to be loaded in the platform boot media (? you find images in /out-br/images)
    • /scripts // .sh and .py scripts
    • /ta including /arch, /mk (with ta_dev_kit.mk), ...
    • /.git
    • /.github
  • optee_client // to generate non-secure services for the OP-TEE OS, files generated from optee_client build are stored in the embedded filesystem
  • trusted-firmware-a // secure world software for Armv7-A/Armv8-A
    • TF-A project - reference implementation of secure world software for Armv7-A and Armv8-A class processors
    • Trusted Firmware-A (TF-A) implements a subset of the Trusted Board Boot Requirements (TBBR) Platform Design Document (PDD) for Arm reference platforms, see https://trustedfirmware-a.readthedocs.io/en/v2.8/design/firmware-design.html
    • Refer also to TF-A info.
  • u-boot // boot loader
    • source code for U-Boot, a boot loader for embedded boards based on ARM, PowerPC, MIPS and others, see local u-boot description
    • contains Makefile with targets such as mrproper, clean, ...
  • qemu // a version with TrustZone support
  • .repo // a local copy of the Internet OPTEE repo where its sources reside, described in ./repo/repo/docs/internal-fs-layout.md
  • optee_examples // including hello_world
  • optee_test // a treasure trove of tests including the xtest regression tests, with source code for host and TA
  • optee_benchmark
  • hafnium // a hypervisor initially supporting aarch64 cpus
  • linux // kernel plus guides for kernel developers (building/running the kernel) - can also be found at https://www.kernel.org/doc/html/latest (so why replicate it here?)
  • mbedtls // Mbed TLS is a C library that implements crypto primitives, X.509 cert manipulation and TLS/ DTLS protocols - documentation in /docs
  • optee_rust //
    • source code of Teaclave TrustZone SDK
      • OPTEE has the Teaclave TrustZone SDK integrated, Teaclave is a 'universal secure computing platform', with components such as Teaclave SGX SDK, Teaclave Java TEE SDK, and Teaclave TrustZone SDK which can be used separately, see local Teaclave info
    • sample Rust programs using the Teaclave TrustZone SDK

Running Qemu on BlackTiger

Running Qemu basics.

Approach 1 direct call. As qemu resides in /bin:
  • $ qemu-system-x86_64 // starts and returns 'no bootable system'
  • $ qemu-system-x86_64 -h // help info, including use ctl-alt-f ctl-alt-n, ... including info on -netdev user,
  • $ qemu-system-x86_64 -L pc-bios // starts and returns 'no bootable system'
  • $ qemu-system-x86_64 /home/marc/OPTEE/out-br/images/rootfs.cpio
    • Warning: Image format was not specified, guessing raw...
    • Starts separate QEMU window, complain about no bootable device, system hang

  • Load and boot Freedos
    • Download a freedos (from https://www.freedos.org/download/) into /Downloads/FreeDos and unzip. Gives FD13BOOT.img and FD13LIVE.iso.
    • cd /home/marc/Downloads/FreeDos
    • qemu-img create image.img 200M // create the virtual disk image (raw format), returns: 'Formatting ...'
    • file image.img // returns 'image.img: data'
    • qemu-system-i386 -hda image.img -cdrom FD13LIVE.iso -m 16M -boot order=dc // start qemu with the disk, providing a DOS prompt
More background: How to test system emulation is working: https://wiki.qemu.org/Testing#System_emulation

Approach 2: through 'make run-only', or 'make run'. It can be seen that the makefile's target 'run-only' on line 473 does the actual invocation of qemu:
	cd $(BINARIES_PATH) && $(QEMU_BUILD)/aarch64-softmmu/qemu-system-aarch64 \
Use Ctrl-Alt-F1 / F2 etc. to connect to consoles, or 'nc -l 127.0.0.1 54320'. Also: once Qemu is launched, start the emulated guest by typing c.

Running xtest, see also optee_test

Regarding xtests: it's important to understand that you run xtest on the device itself, i.e., it is not something you run on the host machine. So I need to boot Qemu with a valid image. Which images are available by default? Check /out-br: /out-br/images/rootfs.cpio. Try booting these ... ref above.

The optee_test.git contains the source code for the TEE sanity test suite in Linux using the ARM TrustZone technology. By default there are several thousand tests when running only the code that is in the git. However, it is also possible to incorporate tests coming from GlobalPlatform. We typically refer to these as:
  • Standard tests: included in optee_test. They are free and open source.
  • Extended tests: written by GlobalPlatform, not open source and not freely available (it’s free to members of GP and can otherwise be purchased).
More info: https://github.com/OP-TEE/optee_test

Running OPTEE using Qemu

Described at https://optee.readthedocs.io/en/latest/building/devices/qemu.html#

Do 'make run' which should take you to the QEMU console. It will also spawn two UART consoles. One console containing the UART for secure world and one console containing the UART for normal world. You will see that it stops waiting for input on the QEMU console. To continue, do: '(qemu) c'

However, 'make run' results in: qemu-system-aarch64: -netdev user,id=vmnic: network backend 'user' is not compiled into this binary. It seems qemu complains about '-netdev user,id=vmnic'.

Qemu networking doc is at
  • https://wiki.qemu.org/Documentation/Networking
  • https://en.wikibooks.org/wiki/QEMU/Networking

Installing OP-TEE with QEMU round 2 - abc

Start at https://optee.readthedocs.io/en/latest/building/index.html

Step 0 specify outcome and configure OPTEE

Specify what the desired outcome is: building OPTEE, including all its components, in such a way that we can develop ARM TZ host/TA programs for execution on an emulated platform, the Qemu ARM v8 emulator. Qemu needs to be provided with the host/TA application, a Linux OS, the trusted firmware TF-A and the u-boot boot loader.

  • Build the appropriate toolchain, using make with configuration variables set correctly
  • Build OPTEE and its components using make with configuration variables set correctly - Step 2 and subsequent steps
  • Build and execute the host/TA application - hello_world application, using the TA Dev Kit
Sidebar: configuration
Config see https://optee.readthedocs.io/en/latest/architecture/porting_guidelines.html?highlight=listing%20variables
  • Online you find e.g.
    • ARM: https://github.com/OP-TEE/optee_os/tree/master/core/arch/arm
    • Sample: platform hikey: https://github.com/OP-TEE/optee_os/blob/master/core/arch/arm/plat-hikey/conf.mk
  • Local: in /core/arch/arm ... hence eg in /OPTEE/optee_os/core/arch/arm/plat-... but there is no plat-qemu directory as such...
  • Local: in /core/arch/arm ... hence eg in /OPTEE/optee_os/core/arch/arm/cpu you find .mk files for cortex-a5.mk, a7, a9, a15 and cortex-armv8-0.mk
    • here you find CFG_HWSUPP_... and CFG_ENABLE_..., and arm32-plat
  • Local: in /home/marc/OPTEE/out-br/.config there are variables defined
    • BR2_ROOTFS_OVERLAY="/home/marc/OPTEE/build/../build/br-ext/board/qemu/overlay"
    • BR2_ROOTFS_POST_BUILD_SCRIPT="/home/marc/OPTEE/build/../build/br-ext/board/qemu/post-build.sh"
    • BR2_PACKAGE_QEMU_ARCH_SUPPORTS_TARGET=y
    • # BR2_PACKAGE_QEMU is not set
    • BR2_PACKAGE_HOST_QEMU_ARCH_SUPPORTS=y
    • BR2_PACKAGE_HOST_QEMU_SYSTEM_ARCH_SUPPORTS=y
    • BR2_PACKAGE_HOST_QEMU_USER_ARCH_SUPPORTS=y
    • # BR2_PACKAGE_HOST_QEMU is not set
More .config files are loaded through the include files.

Step 1 Prerequisites

Specified at https://optee.readthedocs.io/en/latest/building/prerequisites.html#prerequisites.

OK, except python-is-python3 (not in Synaptic); repo was already installed in /usr/bin. Just executing 'repo' gives an error unless you are in a directory where you already did a repo init - this is normal.

Step 2 install repo command

OK, ref above in LTK.

Step 3 get the source code

OK, ref above in LTK, using repo init with the qemuv8 manifest.

Step 4 build toolchains - make toolchains

So $ cd /OPTEE/build, and
  • observe there is a build makefile
    • with variables such as OPTEE_OS_PLATFORM = vexpress-qemu_armv8, with paths, ...
    • with load and entry addresses for u-boot, section on qemu, section on u-boot, ...
    • with targets such as:
      • all (referring to $(TARGET_DEPS), which consists of arm-tf, buildroot, linux, optee-os and qemu)
      • clean (referring to $(TARGET_CLEAN), which consists of cleaning arm-tf, buildroot, linux, optee-os and qemu)
      • qemu (referring to $(QEMU_BUILD)), which builds qemu
      • u-boot and u-boot-clean
      • uimage
      • linux and linux-clean
      • optee-os which is referred to as OPTEE as well
      • hafnium
      • xen
      • run which does all and the run-only
      • run-only which sets the binaries path to /out-br/images/rootfs.cpio.gz, then calls check-terminal, run-help, and launches normal and secure world terminals, then starts qemu with lot of parameters
      • check which starts with CHECK_DEPS, then exports a lot of QEMU variables, then an 'expect' command - NOT ENTIRELY CLEAR
      • check-only, check-rust, check-only-rust
    • with include statements:
      • include common.mk // found in /OPTEE/build,
        • common definitions, variables, macros, configuration, ...,
        • has a buildroot section, buildroot target (creates /out-br), buildroot-clean target, linux targets, ...
        • qemu configuration definitions for random, io, fs,
        • also gnome-terminal, konsole and xterm definitions
        • OPTEE definitions
        • OPTEE Rust definitions
      • include toolchain.mk
        • definitions
        • target toolchains
  • execute make -d toolchains // or alternatively make -d toolchains > 20231120-optee-build-make-1-toolchains.logfile.txt
    • Verbosity: use -d option
    • Restart after first cleaning up: the makefile toolchain.mk does not have a target to clean up - drop how to clean up for the time being
  • LEGACY: execute make -j2 toolchains // drop -j2 for more clarity in logfile
    • there is no 'toolchains' target in /OPTEE/build/Makefile itself, but it comes in via the included toolchain.mk
  • output

Step 5 build project - make

Doc states: 'We’ve configured our repo manifests, so that repo will always automatically symlink the Makefile to the correct device specific makefile, that means that you simply start the build by running (still in /build): $ make -j `nproc` '.

Tips:
  • avoid the -j flag so it’s easier to see in what order things are happening.
  • To create a build.log file do: $ make 2>&1 | tee build.log
  • Don't be confused by the check target, which is related to the run targets, not to the build targets
Build:
  • $ cd /OPTEE/build
  • execute make -d clean // to clean up object and exec files from previous runs, ok
  • execute make -d > optee-build-make.logfile.txt // STALLS because user input is solicited
  • execute make -d // ok

Step 6 Flash the device

You shouldn’t do this step when you are running QEMU-v7/v8 and FVP. DROP.

Step 7 Boot the device

This is device specific, see https://optee.readthedocs.io/en/latest/building/devices/index.html#device-specific for qemu.

This tells you

  • Execute make run // results in
    • '(qemu)' prompt
      • tells you 'QEMU is now waiting to start execution, either c in QEMU console, or attach a debugger and continue from there', 'to run xtest use the 'Normal World'
      • you can enter 'c' for continue
    • terminal Secure World
    • terminal Normal World
      • case 1 with linux, 'Welcome to Buildroot, type root or test to login'; logging in gives you a basic working linux
        • Run 'help' to find out what is possible. Shell is the hush shell. Run 'showvar' to see variables. Many low-level commands such as tpm.
      • case 2 u-boot prompt '=>' - seems an error; why does it stop here, and then how to boot? The coninfo cmd lists console devices and info; there are load and fatload commands; the kernel can be booted with the bootm cmd; the source cmd executes a script; also the run cmd and the bdinfo cmd; see slides
      • => version
        U-Boot 2023.07.02 (Nov 20 2023 - 17:01:32 +0100)
        
        aarch64-linux-gnu-gcc (Arm GNU Toolchain 11.3.Rel1) 11.3.1 20220712
        GNU ld (Arm GNU Toolchain 11.3.Rel1) 2.38.20220708
        
        => printenv
        arch=arm
        baudrate=115200
        board=qemu-arm
        board_name=qemu-arm
        boot_targets=qfw usb scsi virtio nvme dhcp
        bootcmd=setenv kernel_addr_r 0x42200000 && setenv ramdisk_addr_r 0x45000000 && load hostfs - ${kernel_addr_r} uImage && load hostfs - ${ramdisk_addr_r} rootfs.cpio.uboot &&  setenv bootargs console=ttyAMA0,115200 earlyprintk=serial,ttyAMA0,115200 root=/dev/ram && bootm ${kernel_addr_r} ${ramdisk_addr_r} ${fdt_addr}
        bootdelay=2
        cpu=armv8
        ethaddr=52:54:00:12:34:56
        fdt_addr=0x40000000
        fdt_high=0xffffffff
        fdtcontroladdr=8099f9f0
        initrd_high=0xffffffff
        kernel_addr_r=0x40400000
        loadaddr=0x40200000
        pxefile_addr_r=0x40300000
        ramdisk_addr_r=0x44000000
        scriptaddr=0x40200000
        stderr=pl011@9000000
        stdin=pl011@9000000
        stdout=pl011@9000000
        vendor=emulation
        
        Environment size: 754/262140 bytes
        

    Step 8 Load tee-supplicant

    Doc states: 'On most solutions tee-supplicant is already running (check by running $ ps aux | grep tee-supplicant) on others not. If it’s not running, then start it by running: $ tee-supplicant -d . '

    Running where? Open another BlackTiger terminal and do $ ps aux | grep tee-supplicant to find out that it is indeed running.

    OK.

    Step 9 Run xtest

    See https://optee.readthedocs.io/en/latest/building/gits/build.html?highlight=step%208#step-9-run-xtest

    To run xtest (a consolidated sketch follows this list),
    • if not yet done, ensure you're in /OPTEE/build directory and do 'make run'
    • at the (qemu) prompt, enter 'c'
    • connect to the Normal World' terminal and do 'xtest -h' (for help), or just 'xtest'
      • Observation: 'unknown command xtest' appears if normal world has not correctly booted and you're not logged-in to linux e.g. as user test
      • Observation: enter version and you're informed this is U-Boot 2023.07.02 with aarch64-linux-gnu-gcc and GNU ld - seems a runtime error; just start all over from 'make run'
      • Observation: if you forget the 'c' at the qemu prompt, normal world terminal says 'listening' and 'soc_term accepted/read/accepted' but one cannot enter xtest - you need to 'c'
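    Consolidated sketch of the sequence above (everything runs on BlackTiger; xtest itself runs inside the emulated Normal World):
      $ cd /home/marc/OPTEE/build
      $ make run                      # builds if needed, then launches QEMU plus the two UART terminals
      (qemu) c                        # continue execution in the QEMU console
      # in the 'Normal World' terminal: log in (root or test), then
      # xtest -h                      # help
      # xtest                         # run the regression suite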
    OK ... move on with hello_world example.

    Hello_world.c for OPTEE

    Hello_world sources

    Application is split in host and TA part
    • on-line sources are found at https://github.com/linaro-swg/optee_examples/tree/master/hello_world
    • local copies:
      • /OPTEE/optee_examples/hello_world/host/main.c
      • /OPTEE/optee_examples/hello_world/ta
    Status of Hello_world:
    • at https://github.com/linaro-swg/hello_world it is described as deprecated
    • new version is found at https://github.com/linaro-swg/optee_examples

    Hello_world building and running

    Info:
    • Hello_world is briefly described at optee_examples.
    • Makefile example at https://github.com/linaro-swg/optee_examples/blob/master/hello_world/Makefile

    Building the host application - strategy

    There are instructions at two locations:
    • The two ways to build it are at build-instructions
      • You can build the code in this git only, or build it as part of the entire system, i.e. as a part of a full OP-TEE developer setup. For the latter, please refer to instructions at the build page. For standalone builds we currently support building with both CMake as well as with regular GNU Makefiles. However, since both the host and the Trusted Applications have dependencies on files in optee_client (libteec.so and headers) as well as optee_os (TA-devkit), one must first build those and then refer to various files.
      • This means hello_world should already have been built? Then find it / run it ...
        • How is hello_world built? Unclear...
          • The OPTEE/build makefile does not include a target to make examples (nor anything else related)
          • The first include file common.mk contains some variables for the path to the examples and similar
          • The second include file toolchain.mk does not include anything related to examples
          • => interim conclusion: examples such as hello_world are not included in the build of the entire system ...
          • => However, this is WRONG, because it can be executed, see below
        • Can you run it? Doc states 'With this you end up with a binary optee_example_hello_world in the host folder where you did the build.' Not sure how to interpret it... but hello_world can be run
          • $ cd /OPTEE/build
          • $ make run
          • (qemu) 'c'
          • Normal World: login to Buildroot // login as root
          • Normal World: optee_example_hello_world // hello_world runs fine both in Secure and Normal World
      • This means those references should be given for all newly created programs? Seems logical...

    • How to build it stand-alone is at build-using-gnu-make
      • You need things such as defining TEEC_EXPORT=/out/export/usr - see the sketch below
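    A minimal sketch of such a standalone build on this tree; TEEC_EXPORT is the variable mentioned above, while TA_DEV_KIT_DIR, CROSS_COMPILE and the exact export paths are assumptions to be checked against build-using-gnu-make:
      $ export TEEC_EXPORT=/home/marc/OPTEE/optee_client/out/export/usr          # assumed location of libteec.so + headers
      $ export TA_DEV_KIT_DIR=/home/marc/OPTEE/optee_os/out/arm/export-ta_arm64  # assumed TA-devkit export directory
      $ export CROSS_COMPILE=aarch64-linux-gnu-                                  # toolchain prefix, cf. the objcopy path above
      $ make -C /home/marc/OPTEE/optee_examples/hello_world                      # builds host/ and ta/ for this example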

    Other examples for OPTEE - C language

    Listed at https://github.com/linaro-swg/optee_examples, all in C.
    • acipher - C
    • AES - C
    • hotp - C
    • plugins - C
    • random - C
    • secure_storage - C

    Compiling Rust examples for OP-TEE with QEMU (1) OPTEE information

    To differentiate them from the client optee_examples in C, Rust examples are not prefixed with optee_example_ but suffixed with -rs.

    Overview of Rust examples

    Building and running hello_world-rs

    See https://optee.readthedocs.io/en/latest/building/optee_with_rust.html, stating as follows.
    • Rust example applications are located in optee_rust/examples/. To build and install them with Buildroot, run: $ (cd build && make toolchains && make OPTEE_RUST_ENABLE=y CFG_TEE_RAM_VA_SIZE=0x00300000)
      • I.e. recreate the toolchains, then use the top-level /OPTEE/build make without a target (hence first target, all), with two additional variables set (enable Rust, enlarge TEE RAM size)
    • Analysis of building:
      • Question: what working directory to start from? Obviously /OPTEE/build, the top level - see the '$ cd build'
      • Then 'make toolchains' seems not strictly necessary since we've done that - Question: why do they include it here? To be on the safe side of having the right toolchain probably...
      • Then the 'make with two variables' seems what is needed to make the Rust examples...
    • Actual building:
      • BlackTiger, run: $ (cd build && make toolchains && make OPTEE_RUST_ENABLE=y CFG_TEE_RAM_VA_SIZE=0x00300000)
      • ok, see logfile (partial)
      • WHY and HOW DOES THIS ACTUALLY GET BUILT?
        • What is the impact of OPTEE_RUST_ENABLE=y // taken into account in common makefile p.5
          • where is it used? $ grep -r 'OPTEE_RUST_ENABLE' /home/marc/optee
          • grep -r 'OPTEE_RUST_ENABLE' /home/marc/OPTEE
            /home/marc/OPTEE/build/common.mk:OPTEE_RUST_ENABLE ?= n    // here it is set
            /home/marc/OPTEE/build/common.mk:ifeq ($(OPTEE_RUST_ENABLE),y)  // here it is tested
            /home/marc/OPTEE/build/common.mk:ifeq ($(OPTEE_RUST_ENABLE),y)  // here it is tested
            /home/marc/OPTEE/build/20231115-OUTPUT-make-j2-v_toolchains.txt:OPTEE_RUST_ENABLE = n  // mirrored in logfile
            /home/marc/OPTEE/out-br/build/optee_rust_examples_ext-1.0/examples/signature_verification-rs/ta/target/aarch64-unknown-optee-trustzone/release/build/ring-fd13047b24a47bc6/output:OPTEE_RUST_ENABLE: y
            /home/marc/OPTEE/out-br/build/optee_rust_examples_ext-1.0/examples/tls_server-rs/ta/target/aarch64-unknown-optee-trustzone/release/build/ring-4cf077b93131453a/output:OPTEE_RUST_ENABLE: y
            /home/marc/OPTEE/out-br/build/optee_rust_examples_ext-1.0/examples/tls_client-rs/ta/target/aarch64-unknown-optee-trustzone/release/build/ring-4cf077b93131453a/output:OPTEE_RUST_ENABLE: y
            /home/marc/OPTEE/out-br/build/optee_rust_examples_ext-1.0/.github/workflows/ci.yml:          make CFG_TEE_CORE_LOG_LEVEL=0 OPTEE_RUST_ENABLE=y CFG_TEE_RAM_VA_SIZE=0x00300000 check-rust
            /home/marc/OPTEE/optee_rust/.github/workflows/ci.yml:          make CFG_TEE_CORE_LOG_LEVEL=0 OPTEE_RUST_ENABLE=y CFG_TEE_RAM_VA_SIZE=0x00300000 check-rust    // make, embedded in optee_rust 
            /home/marc/OPTEE/optee_os/.github/workflows/ci.yml:          make -j$(nproc) OPTEE_RUST_ENABLE=y check-rust   // same
            
          • what does it change?
    • Analysis of running:
      • During the build process,
        • host applications are installed to /usr/bin/ // normal world => i.e. in the Linux created by buildroot // VERIFY ON BLACKTIGER xxxxxxxxxxxxxxxxxxxxx
        • TAs are installed to /lib/optee_armtz/ // secure world => i.e. in the Linux created by buildroot // VERIFY ON BLACKTIGER xxxxxxxxxxxxxxxxxxxxx
        • After QEMU boots up, you can run host applications in normal world terminal.
        • For example: $ hello_world-rs. // I assume /usr/bin is the Linux in normal world // VERIFY ON BLACKTIGER xxxxxxxxxxxxxxxxxxxxx
      • Then start QEMUv8: $ (cd build && make run-only)
    • Actual running:
      • BlackTiger:
        • (cd build && make run-only)
        • (qemu) 'c'
        • Normal world: login root
        • Normal world: $ hello_world-rs // code in the normal world calls code in the secure world
        • ok, see logfiles

    Building and running your own Rust program

    Recall that
    • For each example the sources, makefiles etc. at GrayTiger local copy
      • The makefile covers both the host and the TA
        • all:
        • @make -s -C host // -s means 'silent', -C means 'change to directory'
        • @make -s -C ta
    • Hello_world-rs at GrayTiger local copy
    So how do you build and run a single host/TA combination?
    • in theory
      • the teaclave doc (below) says: make -C examples/[YOUR_APPLICATION]
    • on BlackTiger - see the sketch below
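    A minimal sketch on BlackTiger, using hello_world-rs as [YOUR_APPLICATION] and assuming the environment from the full OPTEE_RUST_ENABLE=y build is already in place:
      $ cd /home/marc/OPTEE/optee_rust
      $ make -C examples/hello_world-rs     # builds the host and ta parts of this single example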

    Teaclave crates (libraries)

    • Crate teaclave_client_sdk at apache.teaclave
    • Teaclave TZ SDK for host: crate optee_teec, documented at apache.teaclave for context, error, operation, session, ...
    • Teaclave TZ SDK for TA: crate optee_utee, documented at apache.teaclave with modules (arithmetical, crypto_op, extension, net, object, time, trace, uuid), structs, macros, ...
    • List of all crates for applications at apache.teaclave
    • List of all crates for enclaves at apache.teaclave

    Adding your own crates

    Adding libraries to hello_world: see https://github.com/OP-TEE/optee_os/issues/5278 - this describes how to add SSL C-libraries from mbedtls.

    LEGACY - Strategy

  • Question: Does this apply: 'code in this git only' approach, at build-using-gnu-make?
    • Helicopterview: build Rust applications (using OP-TEE's TA-devkit), copy them to target fs, and run them, see https://github.com/apache/incubator-teaclave-trustzone-sdk#run-rust-applications
    • Using Rust with OP-TEE see https://optee.readthedocs.io/en/latest/building/optee_with_rust.html#op-tee-with-rust
    • Compile the Rust examples with OP-TEE see https://optee.readthedocs.io/en/latest/building/optee_with_rust.html#compile-rust-examples

    Compiling Rust examples for OP-TEE with QEMU (2) Teaclave information

    Refer to the teaclave incubator on github.
    • Describes how to make the examples
    • Describes how to make your own application: make -C examples/[YOUR_APPLICATION] xxxxxxxxxxxxx RELEVANT xxxxxxxxxxxxxxxx
    • Describes how to enable Qemu's VirtFS for sharing host applications and TA with the Qemu guest system.
    • Points to OPTEE doc for running.

    Starting and debugging Rust for OP-TEE with QEMU

    OPTEE debugging: https://teaclave.apache.org/trustzone-sdk-docs/debugging-optee-ta.md/
    Sidebar: refer to the LTK QEMU description for information on starting and debugging QEMU.

    Writing a TA - sidebar

    Basics

    • Intro at https://optee.readthedocs.io/en/latest/building/trusted_applications.html
    • Examples at https://optee.readthedocs.io/en/latest/building/gits/optee_examples/optee_examples.html#optee-examples

    Makefile for a TA

    Must be written to rely on OP-TEE TA-devkit resources (to successfully build the target application). TA-devkit is built when one builds optee_os.

    To build a TA, one must provide:
    • Makefile, a make file that should set some configuration variables and include the TA-devkit make file
    • sub.mk, a make file that lists the sources to build (local source files, subdirectories to parse, source file specific build directives)
    • user_ta_header_defines.h, a specific ANSI-C header file to define most of the TA properties
    • An implementation of at least the TA entry points, as extern functions: TA_CreateEntryPoint(), TA_DestroyEntryPoint(), TA_OpenSessionEntryPoint(), TA_CloseSessionEntryPoint(), TA_InvokeCommandEntryPoint()
    Makefile description is provided at https://optee.readthedocs.io/en/latest/building/trusted_applications.html#ta-makefile-basics .

    OP-TEE for Fabric chaincode - sidebar

    Sidebar https://github.com/piachristel/open-source-fabric-optee-chaincode

    OpenSSL

    Basics

    The OpenSSL toolkit essentially includes:
    • libssl.a: the SSLv2, SSLv3 and TLSv1 code;
    • libcrypto.a: general crypto & X.509v1/v3 routines;
    • openssl: a command line tool.
    Documentation:
    • /openssl-0.9.6g/README
    • Check out the help of the "openssl" binary, by "cd /usr/local/ssl/bin" - "./openssl" - "help"
    • Checkout "/openssl-0.9.6b/docs"
    • www.openssl.org ===> downloaded 3 helpfiles: /openssl-0.9.2b/$AddHelp*.*
    See also the cookbook.

    You can test an SSL server with the ssltest.

    Displaying certs and keys

    Use openssl x509 -in filename.

    • Command: openssl x509 -in NID-AniekHannink.cer
    • Result: unable to load certificate 140518461707584:error:0909006C:PEM routines:get_name:no start line:../crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE
    Seems to expect a PEM format...

    Question: How to find out if your certificate is in PEM, DER, or pkcs12 format?

    Answer: Open the certificate using a text editor like Notepad and see if it is in text format.

    Example:

    -----BEGIN CERTIFICATE----- MIIDijCCAvOgAwIBAgIJAKRvtQxONVZoMA0GCSqGSIb3DQEBBAUAMIGLMQswCQYD VQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTESMBAGA1UEBxMJU3Vubnl2YWxl MSAwHgYDVQQKExdBcnViYSBXaXJlbGVzcyBOZXR3b3JrczEMMAoGA1UECxMDVEFD MSMwIQYDVQQDExpteXNlcnZlci5hcnViYW5ldHdvcmtzLmNvbTAeFw0wODA0MzAy MzM3MDJaFw0xMDA0MzAyMzM3MDJaMIGLMQswCQYDVQQGEwJVUzETMBEGA1UECBMK Q2FsaWZvcm5pYTESMBAGA1UEBxMJU3Vubnl2YWxlMSAwHgYDVQQKExdBcnViYSBX aXJlbGVzcyBOZXR3b3JrczEMMAoGA1UECxMDVEFDMSMwIQYDVQQDExpteXNlcnZl ci5hcnViYW5ldHdvcmtzLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA zRwqc9prVXycGhHcsAjGPzC2MKU4DhXSr86Z89Jk8/cXEJBJ0C/NgdAqqDgxneUh nVyxGxODa7BNGAWSagdCsKLrbkchr479E3xLfgdc3UzAJITLGCXGiQ66NwQDyM5I G/xKYm4oqgyOE/lFTTkK0M8V0NmmJynyOCYC/AwQKjMCAwEAAaOB8zCB8DAdBgNV HQ4EFgQUM5btT6IlPGkLTTPvFccTVURO1p0wgcAGA1UdIwSBuDCBtYAUM5btT6Il PGkLTTPvFccTVURO1p2hgZGkgY4wgYsxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpD YWxpZm9ybmlhMRIwEAYDVQQHEwlTdW5ueXZhbGUxIDAeBgNVBAoTF0FydWJhIFdp cmVsZXNzIE5ldHdvcmtzMQwwCgYDVQQLEwNUQUMxIzAhBgNVBAMTGm15c2VydmVy LmFydWJhbmV0d29ya3MuY29tggkApG+1DE41VmgwDAYDVR0TBAUwAwEB/zANBgkq hkiG9w0BAQQFAAOBgQBp71WeF6dKvqUSO1JFsVhBeUesbEgx9+tx6eP328uL0oSC fQ6EaiXZVbrQt+PMqG0F80+4wxVXug9EW5Ob9M/opaCGI+cgtpLCwSf6CjsmAcUc b6EjG/l4HW2BztYJfx15pk51M49TYS7okDKWYRT10y65xcyQdfUKvfDC1k5P9Q== -----END CERTIFICATE-----

    If the certificate is in text format, then it is in PEM format. You can read its contents by openssl x509 -in cert.crt -text

    If the file content is binary, the certificate could be either DER or pkcs12/pfx. To find out which format it is, try the commands below.

    To open a DER certificate: openssl x509 -in cert.crt -inform DER -text

    To display pkcs12 certificate information: openssl pkcs12 -in cert.crt -info
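    For completeness, the same tool can convert between the formats; a hedged sketch (file names are just examples):
      openssl x509 -inform DER -in cert.der -out cert.pem       # DER -> PEM
      openssl pkcs12 -in bundle.p12 -nodes -out bundle.pem      # PKCS#12/pfx -> PEM (certificates plus unencrypted key)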

    RSA key generation

    Change to "/usr/local/ssl/bin". Run "./openssl" puts you in commandline mode.

    You find source in "/openssl-0.9.2b/crypto/rsa/rsa_gen.c". Here p and q are generated as large primes. Next n is calculated (p*q), then d. Sounds reasonable.

    Now look in "/openssl-0.9.2b/doc/ssleay.txt", search for "=== rsa.doc ===". Here you read that the RSA structure used internally can contain both private & public keys. A public key is determined by the fact that RSA-> d value is null. It is explained that rsa_generate_key should only be used to generate initial private keys.

    Note that you can find a source: "/openssl-0.9.2b/crypto/rsa/rsa_gen.c". Have a look...
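    For reference, a current openssl command line can do the same key generation directly; a minimal sketch (file names and key size are just examples):
      openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out rsa_key.pem   # private key (contains p, q, n, d)
      openssl rsa -in rsa_key.pem -pubout -out rsa_pub.pem                            # extract the public part
      openssl rsa -in rsa_key.pem -text -noout                                        # inspect p, q, n, d, e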

    Be your own mini-CA

    Use "req" to create key pairs and certificates. Check out the configuration in "usr/local/ssl/lib/openssl.cnf".

    SSL clients & servers

    Have a look in "/openssl-9.2b/apps". Here you find the source code of e.g. an SSL client (s_client.c) and server (s_server.c). Source code contains lots of explanation.

    OpenSSH

    SSH basics

    There is a set of specifications, and there are various implementations; OpenSSH is the de facto standard implementation. Essential components:
    • ssh(1) - The basic rlogin/rsh-like client program.
    • sshd(8) - The daemon that permits you to login.
    • ssh_config(5) - The client configuration file.
    • sshd_config(5) - The daemon configuration file.
    • ssh-agent(1) - An authentication agent that can store private keys.
    • ssh-add(1) - Tool which adds keys to the above agent.
    • sftp(1) - FTP-like program that works over SSH1 and SSH2 protocol.
    • scp(1) - File copy program that acts like rcp(1).
    • ssh-keygen(1) - Key generation tool.
    • sftp-server(8) - SFTP server subsystem (started automatically by sshd).
    • ssh-keyscan(1) - Utility for gathering public host keys from a number of hosts.
    • ssh-keysign(8) - Helper program for hostbased authentication.

    SSH client

    By default the client 'ssh-agent' is installed, which holds private keys for authentication of e.g. logins or X sessions.

    SSH client config

    Explained at openssh.com man page and openbsd.org man page.

    ssh obtains configuration data from the following sources in the following order:
    • command-line options
    • user's configuration file (~/.ssh/config)
    • system-wide configuration file (/etc/ssh/ssh_config)
    For each parameter, the first obtained value will be used. The configuration files contain sections separated by Host specifications, and that section is only applied for hosts that match one of the patterns given in the specification. The matched host name is usually the one given on the command line (see the CanonicalizeHostname option for exceptions).

    Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end.
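    A minimal sketch of a per-host section in ~/.ssh/config (the host alias, address and key file are hypothetical, loosely modelled on the Angkor2 entries further below):
      $ cat ~/.ssh/config
      Host angkor2
          HostName 192.168.1.5
          User marc
          IdentityFile ~/.ssh/id_ed25519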

    SSH normally does print out error messages. There are two ways to disable it:
    • In your .ssh/config, a line containing 'Loglevel QUIET' will disable all messages.
    • Using the option -q, or aliasing ssh to ssh -q, will do the same thing.

    SSH client keygen

    Explained at openssh.com keygen page. The key type is indicated with -t. If invoked without any arguments, ssh-keygen will generate an RSA key.

    Normally each user wishing to use SSH with public key authentication runs this once to create the authentication key in ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk or ~/.ssh/id_rsa. Additionally, the system administrator may use this to generate host keys, as seen in /etc/rc.

    Ed25519: Source: https://linux-audit.com/using-ed25519-openssh-keys-instead-of-dsa-rsa-ecdsa/. Uses an elliptic curve signature scheme which offers better security than RSA, ECDSA and DSA.

    ecdsa-sk (secret key): Source: https://www.bsdnow.tv/328. Hardware backed keys can be generated using "ssh-keygen -t ecdsa-sk" (or "ed25519-sk" if your token supports it). Many tokens require to be touched/tapped to confirm this step. You'll get a public/private keypair back as usual, but the private key file does not contain a highly-sensitive private key but instead holds a "key handle" that is used by the security key to derive the real private key at signing time. So, stealing a copy of the private key file without also stealing your security key (or access to it) should not give the attacker anything. Once you have generated a key, you can use it normally - i.e. add it to an agent, copy it to your destination's authorized_keys files (assuming they are running -current too), etc. At authentication time, you will be prompted to tap your security key to confirm the signature operation - this makes theft-of-access attacks against security keys more difficult too.

    ssh-keygen will by default write keys in an OpenSSH-specific format. This format is preferred as it offers better protection for keys at rest as well as allowing storage of key comments within the private key file itself. In the past the PEM format was used for private keys. This can still be specified using the -m flag.
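    A hedged sketch of the typical key setup (key type, comment and target host are just examples; ssh-copy-id appends the public key to the remote authorized_keys):
      ssh-keygen -t ed25519 -C "marc@graytiger"      # new keypair in ~/.ssh/id_ed25519(.pub)
      ssh-copy-id marc@192.168.1.5                   # install the public key on the server
      ssh-keygen -p -m PEM -f ~/.ssh/id_rsa          # rewrite an existing private key in the legacy PEM format (the -m flag above)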

    SSH client usage

    To start it, just enter ssh in a terminal. To get feedback about what's happening, make sure error messages are not suppressed in the config (ref above), and specify -vvv.

    Putty

    PuTTY uses a different key file format. It comes with tools to convert between its own .ppk format and the format of OpenSSH. Info at earthli's putty doc and at ssh.com. Converter guide: https://www.simplified.guide/putty/convert-ppk-to-ssh-key.

    SSH server

    You can install sshd manually. With it comes:
    • configs such as /etc/default/ssh, /etc/init.d/ssh, /etc/init/ssh.conf
    • startup script /etc/network/if-up.d/openssh-server
    • /etc/pam.d/sshd
    • servers such as
      • /etc/ufw/applications.d/openssh-server
      • /usr/lib/openssh/sftp-server
      • /usr/lib/sftp-server
      • /usr/sbin/sshd
    • python? /usr/share/apport/package-hooks/openssh-server.py
    • documentation:
      • /usr/share/doc/openssh-client/examples/sshd_config
      • /usr/share/doc/openssh-server
      • /usr/share/man/man5/authorized_keys.5.gz
      • /usr/share/man/man5/sshd_config.5.gz
      • /usr/share/man/man8/sftp-server.8.gz
      • /usr/share/man/man8/sshd.8.gz

    How to configure this? The sshd reads configuration data from /etc/ssh/sshd_config.

    How to start/stop? After installation, the sshd is started automatically.

    How to generate keys? Use ssh-keygen....on client or server side.
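    A hedged sketch of the usual Ubuntu/Debian admin steps for the points above (the systemd unit is called 'ssh' on Ubuntu, matching the /etc/init.d/ssh and /etc/default/ssh files listed earlier):
      sudo apt-get install openssh-server     # manual install of the server
      systemctl status ssh                    # is the daemon running?
      sudo systemctl restart ssh              # pick up changes to /etc/ssh/sshd_config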

    Windows PC: Install cygwin from www.cygwin.com. From Windows to Angkor2: start the ssh client on Windows. Test ssh with "ssh marc@192.168.1.5". If you supply marc's password on Angkor2, you are in. Now, to get the ssh challenge/response to work, you need to copy the id_rsa.pub file (containing the public key) into the file /marc/.ssh/authorized_keys (for info see man sshd).
    Legacy: SCP client and server seem commonly installed. Copy from iMacky to Angkor via "scp marc@192.168.1.5:/Users/shared/final.mov final.mov".

    SSH client seems commonly installed, but server may need manual install eg via Synaptic. Then you have:
    • sshd(8) - Server program run on the server machine. This listens for connections from client machines, and whenever it receives a connection, it performs authentication and starts serving the client. Its behaviour is controlled by the config file sshd_config(5).
    • ssh(1) - This is the client program used to log into another machine or to execute commands on the other machine. slogin is another name for this program. Its behaviour is controlled by the global config file ssh_config(5) and individual users' $HOME/.ssh/config files.
    • scp(1) - Securely copies files from one machine to another.
    • ssh-keygen(1) - Used to create Pubkey Authentication (RSA or DSA) keys (host keys and user authentication keys).
    • ssh-agent(1) - Authentication agent. This can be used to hold RSA keys for authentication.
    • ssh-add(1) - Used to register new keys with the agent.
    • sftp-server(8) - SFTP server subsystem.
    • sftp(1) - Secure file transfer program.
    • ssh-keyscan(1) - gather ssh public keys.
    • ssh-keysign(8) - ssh helper program for hostbased authentication.

    Cross platform, on Mac, on Windows

    I installed the server, then ran ssh-keygen in the home dir of marc3. This saved the RSA keypair in marc3sshrsakeypair1. On the Apple, then do 'ssh marc3@192.168.1.5'. Give the logon password and you're in. Alternatively do 'sftp marc3@192.168.1.5' and you can ftp. Use
    • lls (local list of dir) and regular ls
    • lcd (local cd) and regular cd
    • put ...but this only works for individual files

    So a simple solution is
    • To pack on the Mac: "tar -c *.jpg > ama.tar"
    • To initiate sftp client on Mac with 'sftp marc3@192.168.1.5'
    • Execute the put
    • To unpack on the Linux: "tar -xvf ama.tar"

    Serpent

    Downloaded .tar.gz file from Anderson's Serpent page. The original file is kept in /Kassandra_Data/AdditionalRPM. Unpacked in / and moved all resulting files into /serpent. What do we have:
    • Floppy 1: Serpent C code, header files, ...
    • Floppy 2: Optimized ANSI C implementation
    • Floppy 3: Optimized Java implementation, based on Cryptix code
    • Floppy 4: ...

    EU DSS

    Documentation

    Installation

    The DSS framework resides at https://github.com/esig/dss. There are at least two ways to use EU DSS: build your own local version (approach 1), or by reference (approach 2).

    Approach 1 local build

    Description
    Download from https://ec.europa.eu/digital-building-blocks/wikis/display/DIGITAL/DSS+v5.11.RC1 in /home/marc/Downloads/EU_DSS - there’s a README and a pom.xml.

    Simple build of the DSS Maven project: 'mvn clean install'. This runs all unit tests present in the modules, which can take more than one hour to do the complete build.

    In addition to the general build, the framework provides a list of various profiles, allowing a customized behavior:
    • quick - disables unit tests and java-doc validation, in order to process the build as quick as possible (takes 1-2 minutes). This profile cannot be used for a primary DSS build (see below).
    • quick-init - is similar to the quick profile. Disables java-doc validation for all modules and unit tests excluding some modules which have dependencies on their test classes. Can be used for the primary build of DSS.
    • slow-tests - executes all tests, including time-consuming unit tests.
    • owasp - runs validation of the project and using dependencies according to the National Vulnerability Database (NVD).
    • jdk19-plus - executed automatically for JDK version 9 and higher. Provides a support of JDK 8 with newer versions.
    • spotless - used to add a licence header into project files.
    Execution
    Commands:
    • cd /home/marc/Downloads/EU_DSS/dss-5.11.RC1/
    • mvn clean install
    Uses: /home/marc/Downloads/EU_DSS/dss-5.11.RC1/pom.xml - which contains a.o. profiles listed above (quick, quick-init, ...).

    Outcome: see /home/marc/Downloads/EU_DSS/dss-5.11.RC1/mvn_clean_install.out.txt and 2022-09-19-mvn-clean-install-output.odt.

    First execution results in failures and skips, but the error reports indicate no errors. Rerun, output in 2022-09-19-mvn-clean-install-output.odt - same failure, no real error.

    Try to use a 'profile':
    • cd /home/marc/Downloads/EU_DSS/dss-5.11.RC1/
    • mvn -X quick install > mvn-quick-install-output-5.txt
    • Outcome: fails: 'Unknown lifecycle phase "quick". '
    • mvn -X install quick > mvn-quick-install-output-6.txt
    • Outcome: fails: Unknown lifecycle phase "quick".
    • HELP: https://maven.apache.org/guides/introduction/introduction-to-profiles.html : Profiles can be explicitly specified using the -P command line flag.
    • mvn -X clean install -P quick > mvn-clean-install-quick-output-7.txt
    • Outcome:
      • Fails at line 54509: [INFO] DSS CRL Parser with X509CRL object ................. FAILURE [ 0.557 s]
      • Fails at line 54568: Failed to execute goal on project dss-crl-parser-x509crl: Could not resolve dependencies for project eu.europa.ec.joinup.sd-dss:dss-crl-parser-x509crl:jar:5.11.RC1: Could not find artifact eu.europa.ec.joinup.sd-dss:dss-crl-parser:jar:tests:5.11.RC1 in central (https://repo.maven.apache.org/maven2)
      • MLS: that might make sense - it's probably in some DIGIT repository - repositories are in the pom.xml - there is a dependency statement in the project pom.xml for 'dss-crl-parser-x509crl'
      • MLS: this leads to the question: how are these dependencies resolved??
      • For help: refer to http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    Description at http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException:

    QUOTE

    This error generally occurs when Maven could not download dependencies. Possible causes for this error are:
    • The POM misses the declaration of the <repository> which hosts the artifact.
    • The repository you have configured requires authentication and Maven failed to provide the correct credentials to the server. In this case, make sure your ${user.home}/.m2/settings.xml contains a <server> declaration whose <id> matches the <id> of the remote repository to use. See the Maven Settings Reference for more details.
    • The remote repository in question uses SSL and the JVM running Maven does not trust the certificate of the server.
    • There is a general network problem that prevents Maven from accessing any remote repository, e.g. a missing proxy configuration.
    • You have configured Maven to perform strict checksum validation and the files to download got corrupted.
    • Maven failed to save the files to your local repository, see LocalRepositoryNotAccessibleException for more details.
    UNQUOTE

    Approach 2 by reference

    In your project you can point to the EU Maven repository so there’s no need to download it. For this approach, add the EU DSS repository into the pom.xml file in the root directory of your project:

    ...
    <repository>
      <id>cefdigital</id>
      <name>cefdigital</name>
      <url>https://ec.europa.eu/digital-building-blocks/artifact/content/repositories/esignaturedss/</url>
    </repository>

    There are many Maven modules provided: https://ec.europa.eu/digital-building-blocks/DSS/webapp-demo/doc/dss-documentation.html#MavenModules

    Bouncy Castle

    BC documentation

    BC installation on GrayTiger

    According BC book

    GrayTiger September 2022 runs OpenJDK 1.8 as included with IntelliJ installation. Hence BC download from https://www.bouncycastle.org/latest_releases.html, into /Downloads:
    • the bc provider: bcprov-jdk18on-171.jar
    • the pkix classes: bcpkix-jdk18on-171.jar
    • the test classes: bctest-jdk18on-171.jar
    • ...there's a lot more...
    According the BC book:
    • STEP 1 For Java 1.4 to Java 1.8 you can add the provider by inserting its class name into the list in the java.security file in $JAVA_HOME/jre/lib/security
      • MLS: the jre (java runtime env) is probably /home/marc/.jdks/openjdk-18.0.2/ ... which contains a.o. lib/security - there are many jres, eg for Protégé, PDF-Over, ...
      • The provider list is a succession of entries of the form “security.provider.n” where n is the precedence number for the provider, with 1 giving the provider the highest priority.
      • To add Bouncy Castle in this case you need to add a line of the form: security.provider.N=org.bouncycastle.jce.provider.BouncyCastleProvider
        • MLS: in /home/marc/.jdks/openjdk-18.0.2/lib/security - there are multiple files - default.policy, CAcerts, blocked certs... none has 'security.provider.' inside
        • MLS: in /home/marc/.jdks/openjdk-18.0.2/conf/security there is a java.security file which has the provider statements
      • Where N represents the precedence you want the BC provider to have. Make sure you adjust the values for the other providers if you do not add the BC provider to the end of the list.
      • See next section below for how to do this exactly DONE added 'security.provider.13=org.bouncycastle.jce.provider.BouncyCastleProvider'
    • STEP 2 Then you need to make sure the provider is on the class path, preferably in the $JAVA_HOME/jre/lib/ext directory so that it will always be loaded by a class loader the JCA trusts.
      • Alternative 1 (preferred) copy the BC jars to $JAVA_HOME/jre/lib/ext - i.e. /home/marc/.jdks/openjdk-18.0.2/lib - DONE (copied under /bc)
      • Alternative 2 Set the classpath to include where BC is downloaded - see java_anchor - however this seems less recommended
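    A sketch of STEP 1 and STEP 2 as shell commands on GrayTiger (jar names from the download above; the /bc subdirectory and the provider number 13 follow the notes, but treat the exact paths as assumptions):
      # STEP 2, alternative 1: put the provider jars where the JCA class loader can find them
      mkdir -p /home/marc/.jdks/openjdk-18.0.2/lib/bc
      cp /home/marc/Downloads/bcprov-jdk18on-171.jar /home/marc/.jdks/openjdk-18.0.2/lib/bc/
      cp /home/marc/Downloads/bcpkix-jdk18on-171.jar /home/marc/.jdks/openjdk-18.0.2/lib/bc/
      # STEP 1: register the provider in conf/security/java.security
      echo 'security.provider.13=org.bouncycastle.jce.provider.BouncyCastleProvider' \
          >> /home/marc/.jdks/openjdk-18.0.2/conf/security/java.security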

    Alternative description

    Install and Configure Bouncy Castle Provider
    • Download Bouncy Castle JAR
      • Download bouncy castle provider JAR from BC WebSite.
      • Once downloaded, navigate to Java installed directory.
      • Copy the downloaded jar to “/jre/lib/ext/” (Linux)
    • Enable Bouncy Castle Provider:
      • Navigate to “/jre/lib/security/” (Linux)
      • Open the java.security file and add an entry for Bouncy Castle. Add the Bouncy Castle Provider at the end of the list by incrementing the provider count. The updated provider list would look like:
        #
        # List of providers and their preference orders (see above):
        #
        security.provider.1=sun.security.provider.Sun
        security.provider.2=sun.security.rsa.SunRsaSign
        security.provider.3=sun.security.ec.SunEC
        security.provider.4=com.sun.net.ssl.internal.ssl.Provider
        security.provider.5=com.sun.crypto.provider.SunJCE
        security.provider.6=sun.security.jgss.SunProvider
        security.provider.7=com.sun.security.sasl.Provider
        security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
        security.provider.9=sun.security.smartcardio.SunPCSC
        security.provider.10=sun.security.mscapi.SunMSCAPI
        security.provider.11=org.bouncycastle.jce.provider.BouncyCastleProvider
    Check Bouncy Castle Provider installation:
        package org.learn.bc;

        import java.security.Provider;
        import java.security.Security;

        public class BouncyCastleDemo {
            public static void main(String[] args) {
                String providerName = "BC";
                Provider provider = Security.getProvider(providerName);
                if (provider == null) {
                    System.out.println(providerName + " provider not installed");
                    return;
                }
                System.out.println("Provider Name :" + provider.getName());
                System.out.println("Provider Version :" + provider.getVersion());
                System.out.println("Provider Info:" + provider.getInfo());
            }
        }

    BC in IntelliJ

    How to use BC provider etc in IntelliJ.... Apparently IntelliJ stores some info in /home/marc/.java. There is also /home/marc/.jdks/openjdk-18.0.2 which seems to contain the entire jdk.

    Cryptix (legacy - predecessor to Bouncy Castle)

    What is it - see www.cryptix.org

    Apparently comes in at least two flavours:
    • a provider (e.g. cryptix32) to be used under the Sun JCA - comes with source code, tests and utilities (e.g. to create safe archives)
    • a JCE - i.e. an implementation of the official JCE 1.2 API as published by Sun (apparently sometimes authorisation required to download)
    Very interesting are the source code examples, the tests and the utils such as SCAR - a crypto-secure archive tool.

    Legacy - Cryptix installation on Malekh

    1) Installing the class files (starting from the source code): follow the README:
    1. download zipfile with sources into /Kassandra_Data/AdditionalNonRPM/Cryptix3JCE1.1-src/Cryptix-src...
    2. extract into /Cryptix3
    3. 1.1 install provider in three steps: "cd /Cryptix3/src/cryptix/provider", then "javac Install.java", finally execute it:
      • "CLASSPATH=/Cryptix3/src" --- so you break-off just before the package name
      • "export CLASSPATH"
      • "java cryptix.provider.Install" ---> CLASSPATH || cryptix.provider.Install are now automatically concatenated, and the runtime finds Install.class
      • this is concluded by the message: "security.provider.2=cryptix.provider.Cryptix" is added in "/usr/lib/java/bin/lib/security/java.security". Remove manually if needed.
    4. 1.2 compile: from the top directory, run the make_snap or build shell scripts residing in "/Cryptix3/util". What do we already have: "src", "util (shell scripts)", "doc", "guide (quite nice on crypto & security)", "images", "license", i.e. preparatory stuff. What will be added by compiling: "/build" and the class files below. So I ran "cd /Cryptix3", "sh util/build.sh". This resulted in 1 warning (deprecated APIs).
    5. 1.3 test: sources in /Cryptix3/src/cryptix/test, and they include a statement "package cryptix.test", class files in /Cryptix3/build/classes, hence:
      • "cd /Cryptix3/build/classes"
      • "java cryptix.test.TestMD2" and "java cryptix.test.TestAll" --- OK.

    2) Installing the documentation:
    • "cd /Cryptix3/doc" and ". build.sh" - first execution only partial success (930 items added, 104 errors), due to not setting the env variable JDK_SOURCE, need to re-do this...
    • nevertheless, there is quite some useful documentation in LinuxWeb - ITjava
    • additional documentation available in "/Kassandra/Data/AdditionalNonRPM/Cryptix3...doc..."

    HISTORY - Cryptix installation on Avina - SuSE 6.4

    /Cryptix3 directory copied over from Malekh. Running 'ExamineSecurity' obviously responds that the cryptix provider is not (yet) installed. Hence:
    • "cd /Cryptix3/src/cryptix/provider", then "javac Install.java",
    • execute Install via:
      • "CLASSPATH=/Cryptix3/src" --- so you break-off just before the package name
      • "export CLASSPATH"
      • "java cryptix.provider.Install" ---> CLASSPATH || cryptix.provider.Install are now automatically concatenated, and the runtime finds Install.class - OK

    You can see what Install does by peeking into /Cryptix3/src/cryptix/provider/Install.java: it installs Cryptix in the java.security file (actually '/usr/lib/java/lib/security/java.security'). Feedback from Install:

        Examining the Java installation at /usr/lib/java
        The following lines were added to /usr/lib/java/lib/security/java.security:
        # Added by Cryptix V3 installation program:
        security.provider.2=cryptix.provider.Cryptix
        To uninstall Cryptix, remove these lines manually.

    Try to run my old programs in /Cryptix3/build/classes such as modinverse1: OK. I assume this works since the executables were copied over and the provider is re-installed... Demos are discussed in JTK1.

    IX.105.4 HISTORY - Cryptix32 installation on tux

    Downloaded Cryptix32. You get:
    • Cryptix32/cryptix32.jar => the class files in jar format (so you don't have to compile all the sources)
    • Cryptix32/doc => the API documentation
    • Cryptix32/src:
      • CVS
      • cryptix: here you find a lot of Java source code: provider, test, tools, util, ...
      • netscape
      • xjava
    How to proceed (check out the website):
    1) "Add the JARs to your classpath...": first "CLASSPATH=/Cryptix32/cryptix32.jar:/CryptixSources", then "export CLASSPATH". Check with "echo $CLASSPATH". You can read out the jar with "jar tvf cryptix32.jar".
    2) Install provider:
    1. cd Cryptix32/src/cryptix/provider
    2. javac Install.java
    3. cd /Cryptix32/src
    4. java cryptix.provider.Install =>
      • Examining the Java installation at /usr/lib/jdk1.2.2/jre
      • The following lines were added to /usr/lib/jdk1.2.2/jre/lib/security/java.security
      • "Added by Cryptix V3 installation program:"
      • "security.provider.2=cryptix.provider.Cryptix"
      • To uninstall Cryptix, remove these lines manually.
    Compile (in the previous version /util contained the compilation scripts ... what happened here?) => WRONG ASSUMPTION. Since you can specify the classfiles in jar format through the CLASSPATH (refer to point "1)" above) there is no need to compile. You can execute right away.
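
    The Cryptix32 set-up above, collected into a single hedged shell sketch (here the classpath points at the jar and the source tree; the notes above used /CryptixSources instead of /Cryptix32/src):

        export CLASSPATH=/Cryptix32/cryptix32.jar:/Cryptix32/src
        jar tvf /Cryptix32/cryptix32.jar          # inspect the jar contents
        cd /Cryptix32/src/cryptix/provider && javac Install.java
        cd /Cryptix32/src && java cryptix.provider.Install
        # Install appends security.provider.2=cryptix.provider.Cryptix to
        # <jdk>/jre/lib/security/java.security; remove that line to uninstall.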

    Next steps: start working with certificates (part of Java2 - at least for the basic stuff, without the extensions), and try Rijndael (ref below).

    IX.106 Cryptix ASN1

    Basics

    Software that allows you to:
    • generate .java source files that model ASN1 types defined in an input file
    • compile those sources so they can be used, either on the fly or later
    • use those sources in your own applications

    Installation

    STEP-1. Installed in CryptixASN1. Test/demo sources in /CryptixASN1/jar/src/cryptix/asn1/test. Execute tests via:
    • cd /CryptixASN1/jar/src/cryptix/asn1/test
    • CLASSPATH=/CryptixASN1/jar/cryptix-asn1-19991128-a6.jar
    • export CLASSPATH
    • javac Main6.java
    • CLASSPATH=/CryptixASN1/jar/cryptix-asn1-19991128-a6.jar:/CryptixASN1/jar/src/cryptix/asn1/test
    • export CLASSPATH
    • java Main6
    • ---> wrong name = bad location ...
    TRY AGAIN. Moved Main6.java to /Java02Security and removed the "package" statement inside. Copied cryptix.asn there too. Now fails since the parser is not found... tried with CLASSPATH=/CryptixASN1/jar/cryptix-asn1-19991128-a6.jar:/CryptixASN1/jar/cryptix/asn1/lang:/CryptixASN1/jar/cryptix/asn1/encoding:/Java02Security.
    STEP-2. Wait a moment - prerequisite: needs javacc and jmk. Install those first, into /Java53MetaCC (javaCC_0.class, javaccdocs.zip, jmk.jar ...).
    • jmk: put a copy of jmk14.jar in /CryptixASN1/jar. This allows "java -jar /CryptixASN1/jar/jmk14.jar" (which will use the makefile.jmk).
    • javacc: probably used automatically if you put it on the classpath

    Hmmhm...

    Rijndael

    Rijndael java code is included in Cryptix, use the cryptix32 provider. Start by using "TestRijndael.java":
    • you may have to set the classpath to the jar (ref above), then continue by:
    • "cd /Cryptix32/src/cryptix/test" (position yourself for compilation)
    • "javac TestRijndael.java" (compile)
      • Problem: compilation error: the "import cryptix.provider.key. ..." fails
      • solution: "export CLASSPATH=/Cryptix/Cryptix32/cryptix32.jar"
    • "cd /Cryptix32/src" (position yourself just above the full 'program'-name you specify below for execution)
    • "java cryptix/test/TestRijndael" (you must use this qualified name or the execution fails)
      • Problem: "no such provider" exception
      • solution: install provider
        1. cd Cryptix32/src/cryptix/provider
        2. javac Install.java
        3. cd /Cryptix32/src
        4. java cryptix.provider.Install =>
          • Examining the Java installation at /usr/lib/jdk1.3/jre
          • The following lines were added to /usr/lib/jdk1.3/jre/lib/security/java.security
          • "Added by Cryptix V3 installation program:"
          • "security.provider.3=cryptix.provider.Cryptix"
          • To uninstall Cryptix, remove these lines manually.
    Next step is to use Rijndael in a program. This is done via the Cryptix provider. Check out the test and util programs.

    IX.108 Baltimore KeyTools

    Downloaded them into /Baltimore. Also downloaded the Sun xml parser and api into /Java52XML/....
    1. safeguarded a copy of /usr/lib/jdk1.2.2/jre/lib/security/java.security (because it currently contains the Cryptix provider)
    2. "CLASSPATH=/Baltimore/libs/jpkiplus.jar:/Baltimore/libs/jcrypto_1.1.jar:/Baltimore/libs/jce.jar:/Java52XML/jaxp1.0.1/jaxp.jar:/Java52XML/jaxp1.0.1/parser.jar"
    3. "export CLASSPATH" "echo $CLASSPATH"
    4. "javac BaltKPG.java"

    Mathematica

    Initial installation

    Installed from Berkeley's CD. After installation it comes up with the following error message:

        xset: bad font path element (#38), possible causes are:
            Directory does not exist or has wrong permissions
            Directory missing fonts.dir
            Incorrect font server address or syntax

    Solution: X server's font path may need updating. Probably via "xset fp+ ....". OK problem was that the X server's path referred to /cdrom/.... . Restarting the X server seems to resolve the problem already.
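
    A hedged sketch for inspecting and cleaning the font path by hand (the /cdrom element below is a placeholder for whatever stale entry "xset q" reports):

        xset q                        # shows the current font path among other settings
        xset fp- /cdrom/stale/fonts   # remove the stale element (placeholder path)
        xset fp rehash                # have the server re-read the font databases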

    Reinstall after upgrade

    Modifications to your platform result in a new MathID, which requires a new password. Even insertion/removal of PCMCIA card leads to "missing password".

    Use

    Create new notebook. Use shift-enter to calculate.

    Factoring integers: FactorInteger[n]. There is also Lenstra's FactorIntegerECM, which extends Mathematica's factoring to numbers of approximately 40 digits (the prime factors found are then approximately 18 digits long). Very nice help available.

    Modular: Mod[k, n].

    Modular inverse: you can use PowerMod[a, b, n], which returns a^b mod n. E.g. 5^2 mod 6 can be done as PowerMod[5, 2, 6]. Taking b = -1 gives the modular inverse: PowerMod[3, -1, 7] returns 5, since 3*5 = 15 ≡ 1 (mod 7). You can also go beyond the simple inverse and use b = -2 etc.

    FLINT - Michael Welschenbach

    Copied from the CD into /flint (rijndael not copied yet). The software is the FLINT/C function library (functions for large integers in number theory and cryptography). The library contains a number of modules for arithmetic, number theory, tests, RSA and Rijndael.

    For testrand.c: try make:
    • cd /flint/test then "make": fails
    • try "make -d" to see errormsgs - still not clear why it fails

    For testrand.c: try manual gcc.
    • gcc -v -o testrand testrand.c lacks header files
    • so copied flint.h and assert.h into /usr/include ===> problem gone but now list of unresolved references to the functions that are defined in flint.c itself
    • solution is to link the full flint.c statically into your executable (for which you have to provide the full path), hence:
    • gcc -v -o testrand testrand.c /flint/src/flint.c
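
    A hedged alternative to copying flint.h into /usr/include is to point gcc at the FLINT source tree with -I; a minimal sketch, assuming the headers live in /flint/src:

        # Build testrand without touching /usr/include: -I adds the header
        # search path, and flint.c is again compiled and linked in statically.
        gcc -v -o testrand -I/flint/src testrand.c /flint/src/flint.c
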
    If you get messages that libflint is not found, you can compile with the following flags: "gcc ... -lflint -L/flint/lib".
    For rsademo.cpp (C++ - nice overview of C++ at www.cplusplus.com):
    • since it's C++ we'll need the stdc++ library, including:
      • header files such as iostream.h fstream.h and iomanip.h
      • the library itself
    Suse packages such as libgpp contain C++ libraries such as libg++ and libstdc++, both the header files and the actual libraries. Doing a "locate libstd" finds a list of /usr/lib/libstdc++... libraries. They seem to be shared and static libs. They include:
    • /usr/lib/libstdc++-3-libc6.1-2-2.10.0.so - running "file" on this tells us it is a ELF 32-bit shared object, not stripped

    • /usr/lib/libstdc++-3-libc6.2-2-2.10.0.a - running "file" on this tells us it is a current ar archive

    • /usr/lib/libstdc++-3-libc6.2-2-2.10.0.so - another so

    Remember you can do "ldconfig -p" for an overview of existing libraries, and "ldd" to find out which libraries a program needs. Problem on imagine/tecra780DVD: compiling results in "gcc installation problem - cannot execute cc1plus". However, on the Satellite2060CDS the compile goes OK. Guess I should reinstall the tecra from scratch (Yast2 doesn't work either).
    • append the c++ library to LD_LIBRARY_PATH in the environment variables:
      • use "env" to list
      • then "LD_LIBRARY_PATH=/opt/mozilla//:/opt/mozilla/components//:/opt/kde/lib:/opt/kde2/lib:/usr/lib/libstdc++.so.2.9"
      • then "export LD_LIBRARY_PATH" and "env" to check again
    • use the -l and -L switches at compile/link time
      • specify -l as -lstdc++
      • specify -L as the directory holding the library, e.g. -L/usr/lib (-L expects a directory, not the library file itself)

    EJBCA Beta2

    Requires Ant and Jboss/Tomcat as prerequisites. Then deploy into Jboss.

    EJBCA builds on EJB 1.1, and relies on:

    • BouncyCastle JCE provider - OpenSource JCE crypto provider from www.bouncycastle.org (jar included)
    • JBoss - OpenSource J2EE application server (jar claimed to be included - WRONG INFO)
    • Tomcat - servlet container, invoking servlets for users and handles JSP (jar claimed to be included - WRONG INFO)
    • log4j - from Apache Software Foundation (jar included)
    • JUnit - can be obtained from www.junit.org, (jar included)

    Building EJBCA

    Building EJBCA #1: start-up:
    • unzip into '/', creates /ejbca
    • cd ejbca
    • ant (there's a build.xml present) ===> goes fine, creates subdirs, compiles classes etc.
    • ant javadoc (to build the doc) ===> goes fine
    • copy the Bouncycastle JCE provider /ejbca/lib/jce-jdk13-.jar to the directory jboss/lib/ext ===> however
      1. at first I looked for bcprov*.* and I thought the provider was not supplied (hence I downloaded and manually installed to no good use); it's provided but called jce-jdk13...
      2. and neither JBOSS is present (has to be downloaded too)
      3. COPY LATER WHEN JBOSS IS PRESENT
    Building EJBCA #2: JBoss: Downloaded jboss 2.4.3, the "current stable version", extracted into /JBoss-2.4.3_Tomcat-3.2.3, with a jboss and a tomcat subdirectory. Jboss provides a basic EJB container. Furthermore:
    • JNDI is used to find the remote and home interfaces of the beans.
    • Security is based on JAAS.
    • If the JVM which is used has HotSpot support, it is used.
    • Crimson is used as XML parser.
    • Log4j is used as logger.
    • Config for JBOSS: /JBoss-2.4.3_Tomcat-3.2.3/jboss/conf/default/
      • standardjboss.xml
      • jboss.conf
      • jboss.jcml
      • jboss.properties
    • Security config for JBOSS: /JBoss-2.4.3_Tomcat-3.2.3/jboss/conf/default/
      • server.policy
      • auth.conf
    • Logfile of JBOSS: /JBoss-2.4.3_Tomcat-3.2.3/jboss/log
      • server.log
    • JBoss comes with a test servlet: point your browser to imagine:8080/jboss/index.html.
    • To dynamically administer JBoss services (i.e. start, stop, ...) the MBeans: point browser to localhost port 8082. Particularly JNDIview is helpful.
    • Monitor client: /JBOSS_HOME/jboss/admin/client/monitor.jar: try "java -jar client/monitor.jar"

    Building EJBCA #3: Tomcat: TOMCAT is the servlet container with a JSP environment. A servlet container is a runtime shell that manages and invokes servlets on behalf of users. Tomcat is the official reference implementation for the Java Servlet and JavaServer Pages specifications. It is a Servlet API 2.2 and JSP 1.1 compliant container (remember JServ was only Servlet API 2.0 compliant). It was originally intended to be deployed under Apache, IIS, Enterprise Server or the like. However, it can also be used stand-alone. Tomcat is part of Jakarta.Apache.org. Tomcat documentation is available in /JBoss-2.4.3_Tomcat-3.2.3/tomcat/doc. The two main config files are server.xml and web.xml (defining your servlets and other components).
    Web applications live in "web application archives" which exist in two formats:
    • unpacked hierarchy of directories and files (typically: development)
    • packed hierarchy for deployment, the "wars"
    The top-level directory is the application root, where html and JSP pages are located. At the moment of deployment, a context indication will be added. If tomcat is started, you can access its default homepage on 127.0.0.1:8080. There is an admin page at 127.0.0.1:8080/admin (but what's the password?) So for EJBCA:
    • copied /root/jce-jdk13....jar into /JBoss-2.4.3_Tomcat-3.2.3/jboss/lib/ext
    • set JBOSS_HOME to right value ("export JBOSS_HOME=....")

    Building EJBCA #4: starting JBoss:

    Starting jboss: "/JBoss-2.4.3_Tomcat-3.2.3/jboss/bin/run.sh"
    • Problem 1: Exits since /org/jboss/Main not found... . Try "sh -x /Jboss...." to visualise the batch-file substitutions.
    • Solution 1 You simply need to "cd /JBoss-2.4.3_Tomcat-3.2.3/jboss/bin". Then ". run.sh".
    • Problem 2: starts but automatically calls JDK 1.2 which is of course not high enough. So I must specify to use the J2EE SDK. My JAVA_HOME does not seem to be used. I assume the problem comes from the java wrapper which still points to Java 1.2.2 . How do I change this? Probably due to /usr/lib/java being a link to /usr/lib/jdk1.2.2. The wrapper is /usr/lib/jdk1.2.2/bin/java, pointing to .javawrapper....
    • Solution 2: make /usr/lib/java link to /jdk1.3, so I did:
      1. rm /usr/lib/java
      2. ln -s /jdk1.3.1_01 /usr/lib/java
      3. Now java version informs me of 1.3.1 --- OK.

    Restarting jboss: "/JBoss-2.4.3_Tomcat-3.2.3/jboss/bin/run.sh" or "run_with_tomcat.sh" now calls JDK 1.3. Better. Jboss started 46 services.... How do you stop it? Cntl-C works of course. Then:
    • KEYSTORE: copied the keystore from /ejbca/src/ca/keyStore/server.p12 to /ejbca/tmp = MISTAKE: error in /ejbca/runtest.sh persists: it must be the 'hardcoded path', i.e. "/tmp", nowhere else
    • SET JBOSS_HOME: export JBOSS_HOME=/Jboss-2.4.3_Tomcat-3.2.3/jboss
    • SET JAVA_HOME: export JAVA_HOME=/usr/lib/java
    • DEPLOY: 'cd /ejbca' - then - '. deploy.sh' (which copies the /ejbca/dist/*.jar and *.war files into JBOSS_HOME/deploy)
    • msg: 'bcprov.jar must be copied to /jboss/lib/ext' - which it is... -> checked the script: this is just an echo (stupid)
    • RUN:
      • cd /JBoss-..../jboss/bin
      • /JBoss-... / /run_with_tomcat.sh ---> watch start-up msgs
    • RUNTESTS: /ejbca/runtest.sh - if you get connections refused it means you forgot to deploy - the tests can be found at "/ejbca/src/java/se/anatom/ejbca/ca/auth/junit/...".
    • also nice: via browser to /ejbca/src/webdist/dist.html
    • You can apply for certificates at http://127.0.0.1:8080/apply/request/index.html.

    In /ejbca/dist you find both the .jar and .war files. It's the .war files which provide the html/servlets/JSP's.

    Building EJBCA #5: user/manager views:

    VIEW 1 END-USER
    • problem* since /ejbca/dist/webdist.war contains the dist.html which is deployed, which includes a hardcoded ref to 127.0.0.1, it does not work from any other remote platform. Have to hardcode the right IP here. The html is deployed in /jboss-tomcat.../jboss/deploy. The format is .war - whatever that is. How to recreate it? Source resides in ejbca/src/webdist/dist.html. Building was through ant, with /ejbca/build.xml .
    • solution* modify ip address in dist.html (e.g. to 192.168.0.7), recompile with ant. Redeploy. OK.
    1. 1 Applying for a certificate
      • STEP-1: RA first needs to create the user via the ra command (ref infra).
      • STEP-2: user applies via 127.0.0.1:8080/apply.
      • STEP-3: user fetches cert via 127.0.0.1:8080/webdist - if done under Netscape you can also see the cert in the browserstore.

      Cert types: from CA to end user, ref the ra.java source (ejbca.admin). Requesting a cert via the browser fails if Tomcat does not find javac. That's why you must set JAVA_HOME.
    2. 2 Checking out other certs via 127.0.0.1:8080/webdist/dist.html If you download a cert into e.g. /root/john.cer, you can check it with Sun's keytool: "keytool -printcert -file /root/john.cer"
    3. 3 You can use /Java03dmf/cmf002list to get a more detailed view.
    4. 4 Sampleauth via 127.0.0.1:8080/sampleauth Authentication is performed against database: /ejbca/src/sampleauth/database/dbUsers.txt
    • problem* downloading the root cert fails on Win2000 client ... fails to find javac...

    • solution is to

      • stop ejbca

      • execute "export JAVA_HOME=/usr/lib/java" in the shell

      • restart - OK

    • problem* certificate has expired or is not yet valid says the browser: indeed, Imagine's clock is an hour ahead of Kassandros...

    • solution: change via "date"

    VIEW 2 MANAGER

    You use the ca / ra shellscripts. Find a user:

    • cd /ejbca

    • . ra.sh finduser foo

    Add a user: ". ra.sh adduser theo oeht "CN=theo,O=AnaTom,C=SE" theo@theo.com 1" - careful with the syntax here, no spaces in CN=etc...

    Certificate type

    Can be obtained via ". ra.sh adduser". The possible values are:

    • 1 end-user

    • 2 CA

    • 4 RA

    • 8 RootCA

    • 16 CAadmin

    • 32 RAadmin

    Userstatus

    Can be obtained via ". ra.sh setuserstatus". The possible values are:

    • 10 new

    • 11 failed

    • 20 initialised

    • 30 inprocess

    • 40 generated

    • 50 historical

    IX.112 OpenSC smartcards

    IX.112.a Prereq: openssl and pcsc-lite

    Since PCSC Lite won't install without openssl in place, first installed that in / . Then install pcsc-lite.

    IX.112.b OpenSC - libopensc

    libopensc is a library for accessing SmartCard devices using PC/SC Lite middleware package. It is also the core library of the OpenSC project. Basic functionality (e.g. SELECT FILE, READ BINARY) should work on any ISO 7816-4 compatible SmartCard. Encryption and decryption using private keys on the SmartCard is at the moment possible only with PKCS#15 compatible cards, such as the FINEID (Finnish Electronic IDentity) card manufactured by Setec.

    First attempt to install failed due to lack of library lpcsclite - hence first install this.

    Second attempt goes better but make fails on lacking OpenSSL.

    Downloaded and installed OpenSSL under /OpenSC.

    Still fails. Reinstall OpenSSL and OpenSC both straight under "/". OK.

    Now fails on a missing "-lfl" (the flex library). Resolved by installing the flex package. We have:
    • doc and source code under /opensc-0.7.0 - here you find include files such as opensc.h for compiling against the library
    • library files etc in /usr/local/lib - including libopensc.....
    • binaries of tools under /usr/local/bin:
      • opensc-tool - use "opensc-tool -D -ddddd" - REMEMBER TO FIRST START PCSCD.
      • opensc-explorer - "no readers found" - so how to configure it? /etc/reader.conf
      • pkcs15-tool such as pkcs15-init

    Does not work with the Towitoko reader. According to godot, I have to download the latest version from CVS and then run the bootstrap script, which fails due to missing tools: autoconf, automake and libtool. Downloaded autoconf and automake; apparently libtool is already installed with SuSE 7.2. Make sure to install in / rather than in /root. Each of them has to be installed via ./configure, make etc. - check the INSTALL. Autoconf has a "make check" option; quite some checks failed (but aclocal passed). I did not install libtool since it was already present. Tried /opensc/bootstrap, which failed. Installed libtool from gnu.org. Then back to /opensc. Tried bootstrap again - OK. Tried configure again - OK. But the make install fails.

    Next round after re-installing pcsc-lite (now the older version 1.0.1). Do a bootstrap... a make... still fails... back to godot. Tried again to download the latest version 0.7.0; configure/make/make install goes OK. Now how to configure the readers...

    IX.112.c PCSC Lite

    APPARENTLY have to get PCSC Lite from www.linuxnet.com/middle.html first. This provides (use kpackage):
    • the library libpcsclite.so.0.0.1 in /usr/local/lib
    • pcscd - daemon - apparently in /usr/local/sbin
    • documentation in /usr/local/doc on pcsclite and muscleapi
    • winscard.h and pcsc.h

    Since the configure of opensc fails - ref effort together with Godot - he suggested to downgrade to pcsc-lite-1.0.1. Downloaded pcsc-lite-1.0.1.tar.gz from www.linuxnet.com/middleware/files/. Unpacked into /pcsc-lite-1.0.1. Then do a configure, make, make install. OK, you can start the daemon via "pcscd".

    IX.112.d Towitoko driver

    This library provides a driver for using Towitoko smartcard readers (serial and USB interfaces) under UNIX. It requires PCSC Lite, a smartcard and a reader. Smartcard APIs provided: CT-API 1.1 / CT-BCS 0.9, and PCSC Lite. See http://www.linuxnet.com for download and documentation (pretty unclear ... is it provided or is it a prerequisite?). Installation:
    • configure the serial port as for a modem
    • from the serial howto: remember the serial port is typically something like /dev/ttyS0 - dmesg shows device ttyS00 (a synonym for /dev/ttyS0) is a 16550A UART
    • issuing "setserial -ga /dev/ttyS0" comes back with a reasonable answer on IRQ etc
    • cd /OpenSC/towitoko-2.0.7
    • ./configure (went OK, even without PCSC Lite installed)
    • make (files will go into /usr/local/bin etc)
    • make check (run self-tests)
    • make install

    First round apparently went smoothly, but the doc states the files go into /usr/local/bin and there is nothing there... Oops, this is a mistake in the doc. You find it all in /usr/local/towitoko. The main thing seems to be a shared library. There are man pages but they do not seem to work. There are:
    • bin: tester - this allows direct read/write to the card - call via "/usr/local/towitoko/bin/tester" - THIS WILL NOT WORK IF THE DAEMON RUNS (pcscd)
    • include: ctapi.h and ctbcs.h
    • lib: various libs, including libtowitoko.so.2
    • man: some manpages - but how to read them ...
    Mind you, there is also useful info in /towitoko-2.0.7/doc ... even on design... Using the "tester", you learn the I2C cards are memory cards, 256 bytes. Their ATR (Answer To Reset) is A2 13 10 91. However, the card also contains 2 Kbit EEPROM. How to write there? Reader config in /etc/reader.conf: driver library under /usr/local/towitoko/lib, COM1 = CHANNELID 0x0103F8.

    IX.112.e PCSC-tools

    Via Danny De Cock. http://ludovic.rousseau.free.fr/softwares/pcsc-tools/

    IX.112.f Musclecard PKCS11 framework - DROP FOR TIME BEING

    Requires pcsc-lite-1.1.1 or higher. This framework works on nearly all platforms and provides a pluggable architecture for smartcards and cryptographic tokens. Send and receive signed and encrypted email, and authenticate to SSL sites, all using your smartcard. With tools like XCardII and MuscleTool you can manage your card and personalize it to suit your needs. To install, first make sure pcsc-lite-1.1.1 is installed. Then install each of the plugins for MuscleCard and Cryptoflex. Then install the PKCS11. Once this is installed you will have a /usr/local/lib/pkcs11.so. In Netscape or Mozilla simply use this path and the name "Muscle PKCS#11" and you are ready to begin.

    IX.113 XML signing - XMLSIG

    TRY1 - FAILED - Download binary into Java62XMLSEC

    Start with Apache's software: downloaded xml-security-bin-1_0_5D2.zip into /. Extract with Karchiveur, put it in /Java62XMLSEC. According to the INSTALL, it includes implementations of the W3C recommendations "Canonical XML" and "XML Signature Syntax and Processing". Basically, this means that you can create and verify digital signatures expressed in XML and sign both XML and/or arbitrary contents. Whether you choose the binary or the source version, it seems you need to run ANT - hence first fix the path statement for ANT.
    -1- I started with downloading the binary version and running ANT. Then you get an error since the classfile for ant.md5task is not found. Original statement in BUILD.XML: ... Wrongly updated statement in BUILD.XML: ... Rightly updated statement in BUILD.XML: ... This results in a successful build. From now on you can run 'ant' and you get an explanation of what you can do. However, you seem to need the sources to run e.g. the examples.
    -2- Now download the sources, but be careful not to overwrite existing stuff - download into another dir.

    TRY2 - FAILED - Download sources into Java62XMLSEC2

    Specify the classpath as described above for /Java62XML2. Do a full ant compile. Problem with the import statements. Need a way to specify the prefix /Java62XMLSEC2

    TRY3 - FAILED - Download sources into /

    Fails with 'cannot resolve symbol' for sources which are indeed not yet present such as XpathAPI. Looked into the INSTALL file:
    • download Xerces-J-bin.2.0.0.zip. Not found, downloaded 2.2.1 instead, and extracted into /xerces-2-2-1 .
    • download log4j - manual install and test...

    TRY4 - OK - Download sources and binaries into /

    Run ant compile. OK. Run ant doc - problem (why?). Run ant javadoc - OK but warnings. Full API doc in /build. Main info found in / at:
    • /build
    • /build/doc/html/index.html is a good starting point
    • /build/classes contains all the executables - maybe interesting to put them into a single jar
    • /src
    • /src-samples
    • /libs, where many jars are stored such as bouncycastle, xalan, xerces, ...
    • /data, where many xml-data files are stored
    Using it: see jtk1.html.

    IX.114 Encryption file system encfs

    IX.114.1 Installation of encfs

    As per https://help.ubuntu.com/community/FolderEncryption:

        marcsel@marcsel:~$ sudo apt-get install encfs
        [sudo] password for marcsel:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          librlog1c2a
        The following NEW packages will be installed:
          encfs librlog1c2a
        0 upgraded, 2 newly installed, 0 to remove and 15 not upgraded.
        Need to get 297kB of archives.
        After this operation, 1274kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Get:1 http://dell-mini.archive.canonical.com hardy/universe librlog1c2a 1.3.7-1 [26.3kB]
        Get:2 http://dell-mini.archive.canonical.com hardy/universe encfs 1.3.2-1-1 [270kB]
        Fetched 297kB in 0s (350kB/s)
        Selecting previously deselected package librlog1c2a.
        (Reading database ... 100262 files and directories currently installed.)
        Unpacking librlog1c2a (from .../librlog1c2a_1.3.7-1_lpia.deb) ...
        Selecting previously deselected package encfs.
        Unpacking encfs (from .../encfs_1.3.2-1-1_lpia.deb) ...
        Setting up librlog1c2a (1.3.7-1) ...
        Setting up encfs (1.3.2-1-1) ...
        Processing triggers for libc6 ...
        ldconfig deferred processing now taking place
        marcsel@marcsel:~$

    THEN:
    • add the module fuse in /etc/modules (ok, was already there)
    • sudo adduser fuse (ok, same)
    • sudo chmod +x /usr/bin/fusermount (fails, no such file or directory)

    IX.114.2 Use of encfs

    • Create the encrypted directory: 'encfs ~/securestorage ~/visible' (AES 192 selected).
    • Work in ~/visible. When finished, 'cd ..', then unmount ~/visible with 'fusermount -u ~/visible'.
    • Later, to re-access: 'encfs ~/securestorage ~/visible'.
    • You can use 'encfsctl' later to change your password.
    DOES NOT SEEM TO WORK - go for an alternative. (The commands are collected in the sketch below.)
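
    The cycle above as one hedged shell sketch (directory names as in these notes):

        encfs ~/securestorage ~/visible     # first run creates the encrypted store, then mounts it
        cd ~/visible                        # work on the plaintext view
        cd ~ && fusermount -u ~/visible     # unmount when done; only ~/securestorage remains
        encfs ~/securestorage ~/visible     # remount later; encfsctl changes the password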

    IX.116 Truecrypt

    Seems to work fine, cross-platform.

    zed

    Installation on GrayTiger

    Download from https://www.zedencrypt.com/ into '/home/marc/Downloads/zed/Ubuntu 22.04'. Then 'sudo dpkg --install ZEDFREE-2022.4.Ubuntu22.04.amd64.deb'. ERRORS:

        dpkg-deb: error: archive 'ZEDFREE-2022.4.Ubuntu22.04.amd64.deb' uses unknown compression for member 'control.tar.zst', giving up
        dpkg: error processing archive ZEDFREE-2022.4.Ubuntu22.04.amd64.deb (--install):
         dpkg-deb --control subprocess returned error exit status 2
        Errors were encountered while processing:
         ZEDFREE-2022.4.Ubuntu22.04.amd64.deb

    Conclusion: try Windows. (The same control.tar.zst issue shows up later with GraphDB Desktop; an untested repack sketch follows.)
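
    An untested sketch of a possible workaround: repack the .deb with xz compression so that an older dpkg can read it. Assumes the zstd and xz-utils packages are installed; member names may differ, so check with 'ar t' first.

        mkdir /tmp/zed-repack && cd /tmp/zed-repack
        ar x '/home/marc/Downloads/zed/Ubuntu 22.04/ZEDFREE-2022.4.Ubuntu22.04.amd64.deb'
        ls                              # debian-binary plus the control/data tarballs
        zstd -d control.tar.zst         # -> control.tar (repeat for data.tar.zst if present)
        xz control.tar                  # -> control.tar.xz, which older dpkg understands
        # keep data.tar.* as-is if it was not zst-compressed
        ar rc ZEDFREE-repacked.deb debian-binary control.tar.xz data.tar.*
        sudo dpkg --install ZEDFREE-repacked.deb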

    Erase, shred, wipe

    There is no 'erase' command; Linux has the shred and wipe commands ... what does Danny use? Shred and wipe. According to the man page of shred, it is not reliable on journalled file systems such as ext3 ... so no use? Installed 'wipe' via 'sudo apt-get install wipe'. So, the options are:
    • option 1: edit - encrypt - decrypt - re-encrypt to a new file - 'wipe' the plaintext
    • option 2: simply re-encrypt the decrypted file so it becomes unusable ... this will not work since it leaves the plaintext intact.
    Ref email Danny.

    IX.118 Opencryptoki - PKCS#11

    Ref to '/usr/share/doc/opencryptoki'. OpenCryptoki version 2.2 implements the PKCS#11 specification version 2.11. This package includes several cryptographic tokens, including the IBM ICA token (requires libICA, which supports zSeries CPACF and LeedsLite hardware) and an OpenSSL-based software token. For execution refer to http://www-128.ibm.com/developerworks/security/library/s-pkcs/index.html. Further: openCryptoki defaults to being usable by anyone who is in the group 'pkcs11'. In this version of openCryptoki, the default SO PIN is 87654321 and the default user PIN is 12345678. Both should be changed to different values before use.
    • You can change the SO PIN by running pkcsconf: 'pkcsconf -I'
    • You can change the user PIN by typing: 'pkcsconf -u'
    • You can select the token with the -c command line option; refer to the documentation linked above for further instructions.

    ACR 38 and middleware for BeID

    At a glance

    You'll need:
    • usbmgr, loaded first at boot and normally already in place
    • pcsclite (libpcsclite and pcscd) - might also be preinstalled, check this first because the reader's driver is installed "underneath pcsc"
    • ACR 38 reader and its driver package
    • furthermore:
      • pcsc_tools is handy to scan the reader for a card
      • debian utility 'start-stop-daemon' to start/stop pcsc
      • /usr/bin/cardos-info
      • /usr/bin/cryptoflex-tool
      • /usr/bin/eidenv
      • /usr/bin/netkey-tool
      • /usr/bin/opensc-tool
      • /usr/bin/opensc-explorer
      • /usr/bin/piv-tool
      • /usr/bin/pkcs11-tool
      • /usr/bin/pkcs15-crypt
      • /usr/bin/pkcs15-init
      • /usr/bin/pkcs15-tool
      • logviewer
    • 'opensc' tools, which depend upon
    • 'libopensc2' and
    • 'libopenct1'
    There is
    • pcscd, implementing pcsclite, coordinates the loading of drivers. Use Synaptic to identify its files, doc is in '/usr/share/doc/pcscd'. According to man pcscd, for USB drivers '/etc/reader.conf' is not used (but it's not explained what is used). Some info is in 'man update-reader.conf'.
      • opencryptoki, implementing the PKCS#11 API, interfacing to the underlying tokens, it is supported by:
        • pkcs11_startup, initialising the contents of pk_config_data, normally run from a start-up script
        • pkcsslotd, daemon managing PKCS#11 objects between PKCS#11 enabled applications
        • pkcsconf can be used to further configure opencryptoki once the daemon is running *** 'pkcsconf -i' for info *** fails ....
        • pk_config_data

    BeID on Debian

    Refer to Belpic info.

    BeID on Angkor - Legacy - Debian Lucid

    With Lucid came beid-tools and beidgui. You see this under Synaptic. The ACR38 reader is recognized, the beidgui tool starts, but reading a card fails with "wrong root certificate". According to "https://bugs.launchpad.net/ubuntu/+source/belpic/+bug/546366", this is because the Ubuntu repository for Lucid contains beid software that is too old; the version is "2.6.0-7ubuntu1" for both.

    The solution: download deb package from "http://eid.belgium.be/nl/Hoe_installeer_je_de_eID/Linux/". I stored it in "/home/marc4/Downloads", it's called "eid-mw_4.0.0r925_amd64_tcm147-132618.deb". This raises the question: what is inside this deb package?

    Do: "dpkg -c packagename". This displays all the files, but nothing comparable to "2.6.0-7ubunt1".

    How will it interact with the old beid-tools and beidgui? Let's try.

    RUN1 "sudo dpkg -i eid-mw_4.0.0r925_amd64_tcm147-132618.deb"

    Results in:

        Selecting previously deselected package eid-mw.
        dpkg: considering removing libbeidlibopensc2 in favour of eid-mw ...
        dpkg: no, cannot proceed with removal of libbeidlibopensc2 (--auto-deconfigure will help):
         libbeid2 depends on libbeidlibopensc2 (>= 2.6.0)
          libbeidlibopensc2 is to be removed.
        dpkg: regarding eid-mw_4.0.0r925_amd64_tcm147-132618.deb containing eid-mw:
         eid-mw conflicts with libbeidlibopensc2
          libbeidlibopensc2 (version 2.6.0-7ubuntu1) is present and installed.
        dpkg: error processing eid-mw_4.0.0r925_amd64_tcm147-132618.deb (--install):
         conflicting packages - not installing eid-mw
        Errors were encountered while processing:
         eid-mw_4.0.0r925_amd64_tcm147-132618.deb

    RUN2 "sudo dpkg -i eid-mw_4.0.0r925_amd64_tcm147-132618.deb --auto-deconfigure"

    Problem persists. Now did a manual remove via Synaptics of all installed beid libs and related packages.

    RUN3: same as RUN2, but now with an OK ending. Check in Synaptics: beid stuff in the "old" repositories is visible but not installed; the manually installed deb package is apparently not visible. However, since I uninstalled beidgui and beid-tools - and these were not included in the .deb package - they are no longer present. TRY 'sudo apt-get install beidgui', which results in terrifying messages:

        The following extra packages will be installed:
          beid-tools libbeid2 libbeidlibopensc2
        The following packages will be REMOVED
          eid-mw
        The following NEW packages will be installed
          beid-tools beidgui libbeid2 libbeidlibopensc2

    So this would reinstall what I just removed etc. Not a good plan.

    TRY Info from http://grep.be/blog/en/computer/debian/belpic/ Download "eid-viewer_4.0.0r52_amd64.deb" from http://code.google.com/p/eid-viewer/downloads/list.

    Then "sudo dpkg -i eid-viewer_4.0.0r52_amd64.deb". Goes ok.

    To run "eid-viewer".

    Documentation in /usr/share/eid-viewer. Viewer works fine. TaxOnWeb fails with "SSL peer was unable to negotiate an acceptable set of security parameters. (Error code: ssl_error_handshake_failure_alert)"

    What more do I need to do to register the PKCS11 device? From /usr/share/doc/eid-mw's README: to use the Belgian eID in Firefox, the Firefox extension is recommended to handle configuration automatically. The extension will be installed on Linux and OSX. The default install location on Linux is DATADIR/mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384} (DATADIR is by default PREFIXDIR/lib, PREFIXDIR is by default /usr/local). Google points to: "https://addons.mozilla.org/en-US/firefox/addon/belgium-eid/".

    Install and restart firefox.
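
    If the extension route fails, a hedged alternative is to register the PKCS#11 module by hand with NSS's modutil. The profile directory name below is a placeholder, and the library path is the one used with the older middleware elsewhere in these notes; the eid-mw library may live elsewhere.

        # Close Firefox first; modutil edits the profile's security module database.
        modutil -dbdir /home/marc4/.mozilla/firefox/PROFILE.default \
                -add "Belgium eID" -libfile /usr/lib/libbeidpkcs11.so.2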

    BeID on Angkor (legacy info from Karmic Koala)

    Install
    • beid-tool
    • beidgui
    After installation of these two packages, 'lsusb' results in 'Bus 007 Device 003: ID 072f:9000 Advanced Card Systems, Ltd ACR38 AC1038-based Smart Card Reader'. The 'beidgui' is callable from the KDE launcher but fails with 'unknown errorcode'. But: 'Please read the README.Debian file in the 'libbeidlibopensc2' package for information on setting up your system so that it can read from smartcards.' This points to installing libacr38u and pcscd. Tools:
    • 'id|grep scard' should produce output (otherwise you may not have required authorisation) *** so try to use beid-tool after su ...(later found out I can use my eID with 'id|grep scard' still returning nothing).
    • beid-tool
      • beid-tool -l to list readers
      • beid-tool -a to list atr
      • beid-tool -n to read name of card
      • beid-tool -
    • beid-pkcs11-tool
      • beid-pkcs11-tool -I to show info
      • beid-pkcs11-tool -L to list slots
      • beid-pkcs11-tool -O to list objects
      • beid-pkcs11-tool -M to list mechanisms supported
    • /usr/share/beid/beid-pkcs11-register.html for registering in Mozilla
    Originally, most utilities result in 'cannot connect to X server' ... so try to log in natively as root and startx. However, after a reboot the CLI tools work under marc3. Tried to register '/usr/lib/libbeidpkcs11.so.2' under 'preferences/advanced/security device'. PROBLEM: still not possible to register my own certs. The beidgui (accessible via the KDE launcher) still gives 'wrong root certificate'; according to eid.belgium.be this is due to using too-old middleware. Nice:
    • #stop pcscd (running in background)
    • sudo /etc/init.d/pcscd stop
    • #run in foreground.
    • sudo pcscd --apdu --foreground --debug *** does not seem to work ...

    Installing smart card support on BlackBetty

    On BlackBetty - pcscd and pcsc-tools installation

    On BlackBetty, only libpcsclite1 (essentially '/usr/lib/libpcsclite.so.1.0.0') was already installed wrt pcsc. So I added pcscd and pcsc-tools (which depended upon libpcsc-perl). Installation OK. Let's check: pcscd is normally started at boot time from /etc/init.d/pcscd. But even without rebooting, 'ps -ef' shows me there is a pcscd up and running. Debian has the 'start-stop-daemon' tool, so you can:
    • 'sudo start-stop-daemon --name pcscd --stop'
    • 'sudo start-stop-daemon --exec /usr/sbin/pcscd --start'
    Use the GUI tool system / administration / logviewer to see that indeed the daemon was killed/started. The toolset pcsc-tools contains:
    • pcsc_scan scans available smart card readers and print detected events: card insertion with ATR, card removal;
    • ATR_analysis analyses a smart card ATR (Answer To Reset)
    • scriptor Perl script to send commands to a smart card using a batch file or stdin - see 'man scriptor'
    • gscriptor the same idea as scriptor.pl(1) but with Perl-Gtk GUI - NICE - command file is identical to scriptor
    Ref to http://ludovic.rousseau.free.fr/softwares/pcsc-tools/

    reader driver for ACR38

    Installed package libacr38u. This is reflected in a driver under pcsc ('/usr/lib/pcsc/drivers/ACR38UDriver.bundle'). You can now plug in the reader, insert a card, and run pcsc_scan. OK.

    OpenSC

    Doc in '/usr/share/doc/opensc/index.html'. Depending upon its libopensc2 libraries ... Utilities
    • /usr/bin/cardos-info
    • /usr/bin/cryptoflex-tool
    • /usr/bin/eidenv - reads out standard BeID data - (you may have to cache via pkcs15-tool -L)
    • /usr/bin/netkey-tool
    • /usr/bin/opensc-tool
    • /usr/bin/opensc-explorer
    • /usr/bin/piv-tool
    • /usr/bin/pkcs11-tool - careful since BeID is not standard PKCS#11 for signature key (requires GUI pop-up/PIN every time)
    • /usr/bin/pkcs15-crypt
    • /usr/bin/pkcs15-init
    • /usr/bin/pkcs15-tool - particularly useful since BeID is PKCS#15 - read '/usr/share/doc/opensc/index.html'
    Quick diagnostic: insert reader and eid. Then 'pkcs15-tool -D' to dump PKCS15 objects visible. Then
    • pcsc_scan - so you see pcscd is alive and has your card
    • eidenv - quick readout of your BeID
    • pkcs15-tool -D - dumps available objects - e.g. ID 06 is root cert, ID 04 is operational CA, 03 and 02 are personal
    • pkcs15-tool -r 06 shows the root cert, in .pem format
    • 'pkcs15-tool -r 06 -o 001-belgianroot.pem' exports the cert to file *** but Netscape wants PKCS12
    So 'openssl pkcs12 ...' might help. Do 'man pkcs12'. For example: 'openssl pkcs12 -export -in file.pem -out file.p12 -name MyCertificate'. In practice: 'openssl pkcs12 -export -in 001-belgianroot.pem -out file.p12 -name 001-belgianroot' gives 'unable to load private key'. Sure, I don't have a private key in a pem cert...

    Oddly enough, the openssl doc states (quote): "-in filename: The filename to read certificates and private keys from, standard input by default. They must all be in PEM format. The order doesn't matter but one private key and its corresponding certificate should be present. If additional certificates are present they will also be included in the PKCS#12 file." (unquote)

    So ONE PRIVATE KEY AND ITS CORRESPONDING CERT SHOULD BE PRESENT. Seems odd to me: you can't simply convert a root cert in pem to a root cert in pkcs12. Does this mean the content changes too much to maintain the structure/signature? Peek inside the .pem cert with 'openssl x509 -in 001-belgianroot.pem -noout -text'. You can also convert from pem to der with openssl x509 (see the sketch after the list below). So the final way forward may be:
    • generate keypair on own gemplus card
    • extract pubkey and turn it into a cert
    • import cert under pkcs11 under netscape
    • then use beid package and register belgian pkcs#11 module...
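
    A hedged sketch of the export-and-convert step mentioned above (object ID and file names taken from these notes; whether the browser accepts the result still has to be verified):

        pkcs15-tool -r 06 -o 001-belgianroot.pem                # export the root cert as PEM
        openssl x509 -in 001-belgianroot.pem -noout -text       # inspect it
        openssl x509 -in 001-belgianroot.pem -outform DER -out 001-belgianroot.der
        # Most browsers can import a .pem or .der CA certificate directly into
        # their authorities store, which may avoid the PKCS#12 detour altogether.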

    IX.121 JonDo

    Installing

    On Angkor, use the manual instructions from https://anonymous-proxy-servers.net/en/help etc.... You have to add the repository to your /etc/apt/sources.list, then download and check the pubkey, and do an aptitude install. There are two components:
    • jondo - the proxy running on your local host
    • jondofox - the firefox profile that uses this proxy to surf
    After the install, you need to "sudo jondo" to complete the installation. You can find all files and docpointers in Synaptics.

    TOR

    Installing on Linux

    • Tor Browser Launcher
    • Torbrowser-launcher also includes an AppArmor profile which only allows it to read and write to a minimum of places: torbrowser-launcher/torbrowser.Browser.firefox at develop · micahflee/torbrowser-launcher · GitHub
      • Intended to make the Tor Browser Bundle (TBB) easier to maintain and use for GNU/Linux users. torbrowser-launcher handles downloading the most recent version of TBB for you, in your language and for your architecture. It also adds a "Tor Browser" application launcher to your operating system's menu.
      • When you first launch Tor Browser Launcher, it will download TBB from https://www.torproject.org/ and extract it to ~/.local/share/torbrowser, and then execute it.
      • Cache and configuration files will be stored in ~/.cache/torbrowser and ~/.config/torbrowser.
      • Each subsequent execution after installation will simply launch the most recent TBB, which is updated using Tor Browser's own update feature.
    • Tor Browser
  • Cache and configuration files are stored in ~/.cache/torbrowser and ~/.config/torbrowser.
  • Launch options: --verbose etc

    Installing on GrayTiger

    • Some time ago installed from deb pkg, since dpkg --list 'tor*' shows many files.
    • However, I also find this installation path on GrayTiger: /home/marc/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser.

    To see what tor is installed as deb pkg: dpkg --list 'tor*'

    Returns:
    ii  tor                 0.4.5.10-1~deb11u1 amd64        anonymizing overlay network for TCP
    un  tor-arm             <none>             <none>       (no description available)
    ii  tor-geoipdb         0.4.5.10-1~deb11u1 all          GeoIP database for Tor
    ii  torbrowser-launcher 0.3.3-6            amd64        helps download and run the Tor Browser Bundle
    ii  torsocks            2.3.0-3            amd64        use SOCKS-friendly applications with Tor
    
    To list what files are installed, use 'dpkg -L tor' which returns
    /etc
    /etc/apparmor.d                                         => apparmor stuff!
    /etc/apparmor.d/abstractions
    /etc/apparmor.d/abstractions/tor
    /etc/apparmor.d/system_tor
    /etc/cron.weekly
    /etc/cron.weekly/tor
    /etc/default
    /etc/default/tor
    /etc/init.d
    /etc/init.d/tor
    /etc/logrotate.d
    /etc/logrotate.d/tor
    /etc/runit
    /etc/runit/runsvdir
    /etc/runit/runsvdir/default
    /etc/sv
    /etc/sv/tor
    /etc/sv/tor/.meta
    /etc/sv/tor/.meta/installed
    /etc/sv/tor/log
    /etc/sv/tor/log/run
    /etc/sv/tor/run
    /etc/tor
    /etc/tor/torrc
    /lib
    /lib/systemd
    /lib/systemd/system
    /lib/systemd/system/tor.service
    /lib/systemd/system/tor@.service
    /lib/systemd/system/tor@default.service
    /lib/systemd/system-generators
    /lib/systemd/system-generators/tor-generator
    /usr
    /usr/bin
    /usr/bin/tor
    /usr/bin/tor-gencert
    /usr/bin/tor-print-ed-signing-cert
    /usr/bin/tor-resolve
    /usr/bin/torify
    /usr/sbin
    /usr/sbin/tor-instance-create
    /usr/share
    /usr/share/doc
    /usr/share/doc/tor
    /usr/share/doc/tor/NEWS.Debian.gz
    /usr/share/doc/tor/README.Debian
    /usr/share/doc/tor/changelog.Debian.gz
    /usr/share/doc/tor/changelog.gz
    /usr/share/doc/tor/copyright
    /usr/share/doc/tor/tor-exit-notice.html
    /usr/share/doc/tor/tor-gencert.html
    /usr/share/doc/tor/tor-print-ed-signing-cert.html
    /usr/share/doc/tor/tor-resolve.html
    /usr/share/doc/tor/tor.html
    /usr/share/doc/tor/torify.html
    /usr/share/doc/tor/torrc.sample.gz
    /usr/share/lintian
    /usr/share/lintian/overrides
    /usr/share/lintian/overrides/tor
    /usr/share/man
    /usr/share/man/man1
    /usr/share/man/man1/tor-gencert.1.gz
    /usr/share/man/man1/tor-print-ed-signing-cert.1.gz
    /usr/share/man/man1/tor-resolve.1.gz
    /usr/share/man/man1/tor.1.gz
    /usr/share/man/man1/torify.1.gz
    /usr/share/man/man5
    /usr/share/man/man8
    /usr/share/man/man8/tor-instance-create.8.gz
    /usr/share/runit
    /usr/share/runit/meta
    /usr/share/runit/meta/tor
    /usr/share/runit/meta/tor/installed
    /usr/share/tor
    /usr/share/tor/tor-service-defaults-torrc
    /usr/share/tor/tor-service-defaults-torrc-instances
    /var
    /var/log
    /var/log/runit
    /var/log/runit/tor
    /etc/sv/tor/log/supervise
    /etc/sv/tor/supervise
    /usr/sbin/tor
    /usr/share/man/man5/torrc.5.gz
    

    Installing on Angkor2

    On Angkor2, in /home/downloads/tor-browser_en_US subdir. To start: dolphin, cd to that directory, cd to subdir tor-browser_en-US, doubleclick. Apparently manually executing "./start-tor-browser" does not always work.

    Installing on Windows

    On Windows: from torproject website. TOR browser available from the menu.

    Tor start-up

    Start-up on GrayTiger

    Alternative 1: can be started through Gnome.

    Alternative 2: manual start: 'cd /home/marc' then 'tor'. The problem reported below (port 9050 already in use) is probably because a tor instance is already running, e.g. the packaged tor service; see the sketch after the log.

     tor
    Dec 16 18:20:44.744 [notice] Tor 0.4.5.10 running on Linux with Libevent 2.1.12-stable, OpenSSL 1.1.1n, Zlib 1.2.11, Liblzma 5.2.5, Libzstd 1.4.8 and Glibc 2.31 as libc.
    Dec 16 18:20:44.744 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
    Dec 16 18:20:44.745 [notice] Read configuration file "/etc/tor/torrc".
    Dec 16 18:20:44.746 [notice] Opening Socks listener on 127.0.0.1:9050
    Dec 16 18:20:44.746 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
    Dec 16 18:20:44.746 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
    Dec 16 18:20:44.746 [err] Reading config failed--see warnings above.
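
    A hedged way to check for and stop the packaged tor service before a manual start (service names as listed by 'dpkg -L tor' above):

        systemctl status tor tor@default      # is the packaged service holding 127.0.0.1:9050?
        sudo systemctl stop tor tor@default   # stop it for this session to run tor by hand
        tor                                   # should now bind the Socks listener itself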
    
    To start manually, you could try to first reboot GrayTiger. Alternative 3: cd /home/marc/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser. There you find the start-tor-browser.desktop file, which reads:
    # This file is a self-modifying .desktop file that can be run from the shell.
    # It preserves arguments and environment for the start-tor-browser script.
    #
    # Run './start-tor-browser.desktop --help' to display the full set of options.
    #
    # When invoked from the shell, this file must always be in a Tor Browser root
    # directory. When run from the file manager or desktop GUI, it is relocatable.
    #
    # After first invocation, it will update itself with the absolute path to the
    # current TBB location, to support relocation of this .desktop file for GUI
    # invocation. You can also add Tor Browser to your desktop's application menu
    # by running './start-tor-browser.desktop --register-app'
    #
    # If you use --register-app, and then relocate your TBB directory, Tor Browser
    # will no longer launch from your desktop's app launcher/dock. However, if you
    # re-run --register-app from inside that new directory, the script
    # will correct the absolute paths and re-register itself.
    #
    # This file will also still function if the path changes when TBB is used as a
    # portable app, so long as it is run directly from that new directory, either
    # via the shell or via the file manager.
    
    Launch options: --verbose etc

    Start-up on Angkor2

    Legacy.

    Tor logs

    An error message may appear and you can select the option to "copy Tor log to clipboard". Then paste the Tor log into a text file or other document.

    If you don't see this option and you have Tor Browser open, you can navigate to the hamburger menu ("≡"), then click on "Settings", and finally on "Connection" in the side bar. At the bottom of the page, next to the "View the Tor logs" text, click the button "View Logs...".

    Alternatively, on GNU/Linux, to view the logs right in the terminal, navigate to the Tor Browser directory and
    • launch the Tor Browser from the command line by running:‪./start-tor-browser.desktop --verbose
    • Or to save the logs to a file (default: tor-browser.log): ‪./start-tor-browser.desktop --log [file]
    More information on this can be found on the Support Portal.

    Handling magnets

    When you select a magnet from piratebay, Tor/Firefox has by default no protocol handler for magnets. In Firefox, you can enter 'about:config' as url, and then you can add 'network.protocol-handler.expose.magnet' ... but it does not work for me. So: just open the magnet in another tab, copy it, paste it in Ktorrent.

    Where to go - onion sites

    List with onion sites: separate file.

    I2P

    Installing

    On Angkor, use the manual instructions from I2P2.de/debian. You have to add the repository to your /etc/apt/sources.list, etc. On Windows ... Then
    • start the I2P router "i2prouter start" (no sudo) - the proxy running on your local host
    • this gets you a console in your browser - at http://127.0.0.1:7657
    • configure your browser to go to proxy on ports 4444 and 4445 (http/s)
    • added entries in C4 LAN firewall for 4444/4445 traversals... but does this help?

    Locate the torrent as a magnet in Postman or Welterde, copy it over to I2PSnark and start the torrent there. Downloads are shown in the application window; right-click to save them. Finding eepsites: the installation is in /var/lib/i2p, where you find i2p-config, e.g. an addressbook and doc.

    A-SIT cert tool

    Installation on GrayTiger

    Downloaded from https://technology.a-sit.at/zertifikats-status/. Installed in /home/marc/Downloads/asit-cert-tool. Modified start.sh by inserting right java path.

    Torrents

    GrayTiger: Synaptics: KTorrent, a BitTorrent peer-to-peer network client, that is based on the KDE platform. Start via Gnome GUI, copy magnet from torrent site (e.g. limetorrents.lol via Tor).

    Applications - semantics

    Protege

    Protege on GrayTiger

    Downloaded and installed in /home/marc/Downloads/Protege-5.5.0. Run via /home/marc/Downloads/Protege-5.5.0/run.sh.

    OpenRefine/OntoRefine

    OpenRefine/OntoRefine on GrayTiger

    Installation on GrayTiger

    Downloaded in /home/marc/Downloads/ontotext-refine_1.0.0-1_amd64.deb. Installed as 'sudo apt-get install localpathtodebfile'. Run via icon/favorites. Some config in '/home/marc/.ontorefine'.

    Demofile loaded in '/home/marc/Documents/Beta/101-SME/010 Semantics/405 OntoRefine - OpenRefine/OntoRefine/Nederlands_restaurants.csv'

    Functionality

    Import, clean, export in formats such as RDF XML, Turtle.

    GraphDB

    GraphDB on GrayTiger

    Installation

    Installation OK - 2022-11 platform indep release as suggested
    • Download as per email from https://download.ontotext.com/owlim/eda2386e-1173-11ed-a373-42843b1b6b38/graphdb-10.0.2-dist.zip into /home/marc/Downloads
    • Extraction in /home/marc/Downloads/graphdb-10.0.2
    • Run the graphdb script in the bin directory.
    • Error ...
      • Exception in thread "main" org.apache.catalina.LifecycleException: Protocol handler initialization failed
      • Caused by: java.net.BindException: Address already in use
      • => typically graphdb is already running...stop/reboot/retry
    • To access the Workbench, open the local URL (http://localhost:7200 by default).
    • Logs go to the local logs directory (/home/marc/Downloads/graphdb-10.0.2/logs).
    Installation attempt #1 LEGACY 2022-08 - graphdb-desktop_10.0.2-1_amd64.deb
    • Download the GraphDB Desktop .deb or .rpm file. Done in /home/marc/Downloads
    • Install the package with sudo dpkg -i or sudo rpm -i and the name of the downloaded package. Alternatively, you can double-click the package name.
      • Error:
        root@GrayTiger:/home/marc/Downloads# dpkg -i /home/marc/Downloads/graphdb-desktop_10.0.2-1_amd64.deb
        dpkg-deb: error: archive '/home/marc/Downloads/graphdb-desktop_10.0.2-1_amd64.deb' uses unknown compression for member 'control.tar.zst', giving up
        dpkg: error processing archive /home/marc/Downloads/graphdb-desktop_10.0.2-1_amd64.deb (--install):
         dpkg-deb --control subprocess returned error exit status 2
        Errors were encountered while processing:
         /home/marc/Downloads/graphdb-desktop_10.0.2-1_amd64.deb
  • Solution:
    • Debian seems to lack support for zst compression see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892664
  • Start GraphDB Desktop by clicking the application icon. The GraphDB Workbench opens at http://localhost:7200/.
  • Reply from Luben Karaslavov (cc graphdb-support@ontotext.com), 2022-08-05 16:02:

    Hello Marc, I will report the issue to the dev team about the .deb packages being packaged as .zst which isn't supported by the debian dpkg command. In the meantime you can try using the platform independent release of GraphDB that can be found here: https://download.ontotext.com/owlim/eda2386e-1173-11ed-a373-42843b1b6b38/graphdb-10.0.2-dist.zip

    To run it you'll need a java runtime environment version 11+ which you can install with: sudo apt install openjdk-11-jre

    With it installed, you just need to extract it, navigate to the bin folder and use the `graphdb` startup script. Regards, Luben
    LEGACY - Installation attempt # 2 2022-11 graphdb-desktop_10.1.0-1_amd64
    Check out Ontotext download site: the version offered for download now is graphdb-desktop_10.1.0-1_amd64 (so _10.1 instead of _10.0). Same error.

    GraphDB start-up

    • Use terminal to cd to /home/marc/Downloads/graphdb-10.0.2
    • Run the graphdb script in the bin directory.
    • To access the Workbench, open the local URL (http://localhost:7200 by default).
    • Logs go to the local logs directory (/home/marc/Downloads/graphdb-10.0.2/logs).

    GraphDB data load

    Steps:
    • Download e.g. file from data.world. Such file can be 5 GB, so regular load fails.
    • Solution: move ttl file such as L1Data20221112.ttl to /home/marc/graphdb-import. As a consequence the file shows up in Workbench/Server Files and can be imported there.
    • So you can do an import of server file.
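
    As an alternative to the Workbench import, data can in principle also be pushed over GraphDB's RDF4J-style REST interface. A hedged sketch, assuming the default port 7200 and a repository called 'myrepo' (the repository name is a placeholder):

        curl -X POST -H "Content-Type: text/turtle" \
             --data-binary @/home/marc/graphdb-import/L1Data20221112.ttl \
             http://localhost:7200/repositories/myrepo/statements
        # For multi-GB files the Workbench "Server files" import above is
        # probably the more robust route.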

    Adding equivalentClass statements to GraphDB

    Approach 1: edit the .ttl file with gedit. But if this is 5 GB... Alternative editor: joe.
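
    Approach 2 (a hedged sketch): keep the 5 GB file untouched and put the extra owl:equivalentClass statements in a small separate Turtle file, dropped into /home/marc/graphdb-import and imported like any other server file. The class IRIs below are placeholders.

        printf '%s\n' \
          '@prefix owl: <http://www.w3.org/2002/07/owl#> .' \
          '<http://example.org/onto#ClassA> owl:equivalentClass <http://example.org/other#ClassB> .' \
          > /home/marc/graphdb-import/equivalences.ttl
        # The new file then shows up under Workbench / Server Files and can be
        # imported into the same repository as the big dataset.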

    Applications - graphics

    X.101 Xv

    In order to display e.g. PowerPoint slides:
    - save them as JPEG
    - transfer them to Linux
    - start Xv & load the JPEG
    - use shift-space to move to the next slide
    - use shift-backspace to move back to the previous one
    - use < and > to increase/decrease screen size.

    Remember the Visual Schnauzer (cntl-v) gives you the 'thumbnails'.

    Documentation can be found in /usr/doc. This is in PostScript format, so you can use GhostView to read it.

    Gimp

    Installation

    Install via Synaptics.

    Basics

    Image basics

    Image files

    • Images are the basic entities, an “image” corresponds to a single file, such as a TIFF or JPEG file.
    • Gimp's native image format is XCF.
    • Gimp UI 'image/properties' shows the image type.

    Image concept

    • You can also think of an image as corresponding to a single display window (although in truth it is possible to have multiple windows all displaying the same image).
    • It is not possible to have a single window display more than one image, though, or for an image to have no window displaying it.
    • Instead of thinking of it as a sheet of paper with a picture, think of it as a stack of 'layers'.
    • In addition to a stack of layers, a GIMP image may contain
      • a selection mask,
      • a set of channels, and
      • a set of paths.
      • 'parasites', arbitrary pieces of data attached to an image.

    Image resolution

    • Digital images consist of a grid of square pixels. Each image has a size measured in two dimensions, such as 900 pixels wide by 600 pixels high.
    • Printing and resolution
      • Pixels don't have a set size in physical space. To set up an image for printing, we use a value called resolution, defined as the ratio between an image's size in pixels and its physical size (usually in inches) when it is printed on paper.
      • Most file formats (but not all) can save this value of resolution, which is expressed as ppi—pixels per inch.
      • Images imported from cameras and mobile devices tend to have a resolution attached to the file. The resolution is usually 72 or 96 ppi.
      • When printing a file, the resolution determines the size the image will have on paper, and as a result, the physical size of the pixels. The same 900x600 pixel image may be printed as a small 3x2" card with barely noticeable pixels—or as a large poster with large, chunky pixels.
      • Changing resolution for printing
        • An image may, for example, have the following properties (see the worked check after this list)
          • Size in pixels: 1879 × 1338 pixels
          • Print size: 655.9 × 467.0 millimetres
          • Resolution: 72.7684 × 72.7684 ppi
        • A printer/printshop may support e.g. Albelli: 15 x 10 cm or 19 x 13 cm
        • So save a copy by ...

    • Displaying images on-screen
      • It is important to realize that the resolution attached to an image imported from a camera or mobile device is arbitrary and was chosen for historic reasons.
      • You can always change the resolution with GIMP—this has no effect on the actual image pixels. Furthermore, for uses such as displaying images online, on mobile devices, television or video games—in short, any use that is not print—the resolution value is meaningless and is ignored. Instead, the image is usually displayed so that each image pixel conforms to one screen pixel.
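
    A quick check of the print-size arithmetic in the example above (print size = pixels ÷ resolution in ppi, converted to millimetres at 25.4 mm per inch):

      echo "1879 / 72.7684 * 25.4" | bc -l    # ~ 655.9 (mm, width)
      echo "1338 / 72.7684 * 25.4" | bc -l    # ~ 467.0 (mm, height)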

    Channels

    • A Channel is a single component of a pixel's color.
      • For a colored pixel in GIMP, these components are usually Red, Green, Blue and sometimes transparency (Alpha).
      • For a Grayscale image, they are Gray and Alpha and for an Indexed color image, they are Indexed and Alpha.
    • The entire rectangular array of any one of the color components for all of the pixels in an image is also referred to as a Channel. You can see these color channels with the Channels dialog.
    • When the image is displayed, GIMP puts these components together to form the pixel colors for the screen, printer, or other output device. Some output devices may use different channels from Red, Green and Blue. If they do, GIMP's channels are converted into the appropriate ones for the device when the image is displayed.

    Layers

    • Layers need not be opaque, and they need not cover the entire extent of an image, so when you look at an image's display, you may see more than just the top layer: you may see elements of many layers.

    User interface

    • gimp main windows: Gimp can be in single- or multi-window mode; toggle via 'Windows'.
    • gimp toolbox
      • Includes the color area, which shows GIMP's basic palette, consisting of two colors, the Foreground and Background, used for painting, filling, and many other operations.

    Selections

    • When modifying an image, you may want only a part of the image to be affected. The 'selection' mechanism makes this possible.
    • Each image has its own selection, which you normally see as a moving dashed line separating the selected parts from the unselected parts (the “marching ants” ). Actually this is a bit misleading: selection in GIMP is graded, not all-or-nothing, and really the selection is represented by a full-fledged grayscale channel.
    • The dashed line that you normally see is simply a contour line at the 50%-selected level.
    • At any time, though, you can visualize the selection channel in all its glorious detail by toggling the QuickMask button.

  • intelligent scissors
  • foreground selection tool

    GhostView

    GhostView can be used to view PostScript, Microsoft Documents, ...

    xpdf / legacy PDF Reader

    Prior to SuSE 6.4, xpdf needed to be installed to view files in Acrobat Reader format (*.pdf). If xpdf does not come up with a menu, use the right mouse button, or run it as "xpdf /filename.pdf".

    These days, use Okular.

    POVray

    POVray and x-povray (provided in /usr/X11R6/bin/... - /usr/lib/povray ... )

    Blender

    Refer to BTK1.html

    thumbnail

    Simple command to create tiff thumbnails.

    Applications - security

    Satan

    Refer to history log.

    ISS/SafeSuite

    Cf. the CD or www.iss.net. Key prerequisites include:
    • Operating Linux machine, with full logging enabled
    • ISP link up and running (modem, outgoing line, minicom, dns, ppp & routing configured, ISP account, target "pingable")
    • X operational (for ISS configuration - you'll need an "iss.key" file - but you can always test your ISS against localhost)
    • Netscape operational (for ISS reports & vulnerability database)
    • Appropriate iss configuration, matching your objectives.
    Key ISS files include:
    • iss (the executable)
    • xiss (the X version of iss)
    • iss.key (the keyfile, if you want to scan anything but localhost)
    • iss.config (the configuration file, which you influence via the config script) - config-gen (the config script)
    • iss.log (the full logfile of iss)

    nmap

    From www.insecure.org/nmap. Check out /usr/doc/packages/nmap/docs. Usage includes:
    • nmap -h (nice overview, complements man nmap)
    • nmap -sP localhost (scans with ping)
    • nmap -sT localhost (TCP connect port scan)
    • nmap -sU localhost (UDP scan)
    • nmap -F ... (fast scan)
    • nmap -A ... (aggressive scan)
    Nice demo: 'nmap -sT localhost' - crosscheck the output with /etc/inetd.conf, modify the inetd.conf, rerun the 'nmap'. Allows 4 stealth modes: SYN (SYN flag set), FIN (FIN flag set), XMAS (FIN, URG, PUSH flags set) and NULL (all flags off). Allows "decoy" mode: e.g. "nmap -sS -P0 -Ddecoy1.com,ME,decoy2.com,decoy3.com myhost.com".

    nc netcat

    Apparently originally written by hobbit@avian.org (remember CACA...). Documentation in /usr/doc/packages/netcat/README or /usr/share..., and see http://netcat.sourceforge.net/. The simplest use is 'netcat host port'. Unlike e.g. telnet, netcat will not interfere with or interpret the data stream in any way. Also:
    • allows you to dump the stream into a logfile (-o logfile)
    • allows you to shove a file towards a connecting process
    • can be used as a simple port scanner, scripter, ...
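
    A minimal sketch of the 'simplest use' above, as a listener/sender pair (port 4444 is an arbitrary choice; depending on the netcat variant the listener syntax is 'nc -l -p 4444' or 'nc -l 4444'):

      # Terminal 1: listen on TCP port 4444 and write whatever arrives to a file
      nc -l -p 4444 > received.txt
      # Terminal 2: shove a file towards the listening process
      nc localhost 4444 < somefile.txt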

    Queso

    From "www.apostols.org/projectz/queso". Identifies about 100 OS's.

    4g8

    4g8 for traffic capturing in a switched environment

    Linux as a packet filter

    Various elements evolved over time:
    • Firewall HOWTO (1996)
    • Ipchains HOWTO (1998)
    • Bridge+Firewall Mini-HOWTO (1997)
    • Cipe+Masquerading Mini-HOWTO (1998): VPN
    • Firewall piercing Mini-HOWTO (1998)
    • VPN Mini-HOWTO (1997)
    The Linux kernel can be instructed to do packet filtering, if you compile in the right options. Forwarding and logging will then be managed via ipfw and ipfwadm. Refer to the man pages. Later, IPchains was a rewrite of the firewalling code, and of ipfwadm. You can also run a proxy server on Linux, using e.g. SOCKS (a single utility to cover all protocols, one daemon and one config file) or the TIS fwtk (one utility per protocol).

    IPchains on malekh

    Check out:
    • /usr/doc/how/en/NET-3 (contains a section on IPchains)
    • /usr/doc/howto/en/ipchains
    As the IPchains HOWTO explains, you have ipchains if you find /proc/net/ip_fwchains. I do have this on malekh, and the content reads:
    • input 00000000/00000000->C0000001/FFFFFFFF - 10 0 0 0 0 0 0 0-65535 0-65535 AFF X00 00000000 0 0 -
    • output C0000001/FFFFFFFF->00000000/00000000 - 10 0 0 0 0 0 0 0-65535 0-65535 AFF X00 00000000 0 0 -
    How does this work? Read:
    • the HOWTO
    • the source at "/usr/src/linux/net/ip_fw.c"
    • man ipchains
    Issuing "ipchains -L" (for list) reveals the current set-up.

    Kismet - DROPPED

    On BlackBetty

    From Ubuntu hardware tests determine network hardware:

    Broadcom Corporation BCM4312 802.11b/g (rev 01) Realtek Semiconductor Co., Ltd. RTL8101E PCI Express Fast Ethernet controller (rev 02)

    So /etc/kismet/kismet.conf should have 'bcm43xx' or 'b43' or 'b43legacy' as capture source. But then:

    marcsel@BlackBetty:~$ sudo kismet_server
    Suid priv-dropping disabled. This may not be secure.
    No specific sources given to be enabled, all will be enabled.
    Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
    Enabling channel hopping.
    Enabling channel splitting.
    Source 0 (mlsbbitf): Enabling monitor mode for bcm43xx source interface eth1 channel 6...
    FATAL: Failed to set monitor mode: Invalid argument. This usually means your drivers either do not support monitor mode, or use a different mechanism for getting to it. Make sure you have a version of your drivers that support monitor mode, and consult the troubleshooting section of the README.

    Tried via 'system' 'administration' 'hardware drivers' to activate a proprietary broadcom driver.... Does not solve the problem....

    Wireshark

    To find out which interfaces you can capture on: ifconfig, or 'sudo wireshark -D'. This results in e.g.: 1. sms 2. eth0 3. eth1 4. any (Pseudo-device that captures on all interfaces) 5. lo. Ifconfig will show that eth1 is active.

    Start-up: Wireshark can only be run by root, hence "sudo wireshark -D" to display the available interfaces, and "sudo wireshark" to start.

    Documentation: doc and FAQ in "/usr/share/wireshark/wireshark/help", man pages in "/usr/share/wireshark/wireshark/wireshark.html".

    Working with Wireshark: Wireshark's native capture file format is libpcap format, which is also the format used by tcpdump and various other tools. Use "Capture - Options" or Ctrl-K to capture in promiscuous mode. Run "ifconfig" to identify your own IP address. My MAC according to ifconfig for eth1: 90:4c:e5:1a:ad:85. Sample captures saved on BlackBetty in MLS-wireshark.
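
    A short capture session along the lines above (interface eth1 as in these notes; -i selects the interface, -k starts capturing immediately):

      sudo wireshark -D              # list the interfaces you can capture on
      sudo wireshark -i eth1 -k      # start the GUI and begin capturing on eth1 right away
      # Captured data is in libpcap format, so saved files can also be read with tcpdump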

    Nessus

    Nessus 2.2.9 (current is 4.2) - client/server.
    • Executables: in /usr/bin
    • Documentation: /usr/share/doc/nessus - mainly about installing and starting - and /usr/share/doc/nessusd
    • Plugins: /usr/share/doc/nessus-plugins - some contents on plugin structure
    • Downloadable pdf user manual on the nessus.org website
    How to start using Nessus?
    • Set up the server certificate with 'nessus-mkcert': OK
    • Set up a user with 'nessus-adduser': ok "nexus", psw =
    • Set up the client certificate with 'nessus-mkcert-client': ok, msg: >>> Your client certificates are in /tmp/nessus-mkcert.5883 >>> You will have to copy them by hand >>> mls: to where? >>> mls: ok, seems not necessary.
    • Run 'nessusd -D' in order to start the daemon.
    • Change back from root to a normal user, run X and start 'nessus' (or select it from the menu, it's in the Apps/System submenu). Tell the client to connect to localhost. It will ask you for a username and password. Enter the user/password you set up with nessus-adduser, and off you go.
    Registering Nessus: registered as marc.louis.sel@gmail.com. Got the activation code back via email. Registration OK. Updating plugins is manual, via the "nessus-update-plugins" command.
    Using Nessus: the website provides documentation for version 4.2, but what installs on BlackBetty is an old 2.2.9.

    BackTrack

    Pristine sources:
    • http://www.backtrack-linux.org
    • http://www.remote-exploit.org (old)
    Installed as per instructions. Bootable copies on USB and CD. Wireless interface works on old PwC Compaq, and not on Dell Inspiron Mini BlackBetty. Reason is documented on website: the Broadcom wireless chipset of the Dell is not supported.

    Network is not started automatically (stealth). This does not stop the capturing radiotools etc. from working. Use "lshw" to learn that the HP EliteBook 8440p has a wireless Centrino Advanced-N 6200, called wlan0. It supports IEEE 802.11abgn and promiscuous mode; the driver is iwlagn. Try also "nm-tool" on Ubuntu.
    • ifconfig: stop the interface with "ifconfig wlan0 down", start it with "ifconfig wlan0 up".
    • ip: to manipulate routing; list addresses with "ip addr".
    • iwconfig: also for scanning.
    Further:
    • Manual connecting to an (unprotected) Access Point:
    • sudo iwlist wlan0 scan
    • ifconfig wlan0 down (mind you: ifconfig)
    • ifconfig wlan0 up

    • iwconfig wlan0 mode managed (mind you: iwconfig)
    • iwconfig wlan0 channel 11 (your channel)
    • iwconfig wlan0 essid networkname (your essid)
    • after these commands: still not associated.... and wireshark says there are no packets on wlan0 WHY?
    • tried iwconfig wlan0 essid linksys ap
    • "ALT: sudo iwconfig <itf> essid <essid> ap <ap> key <key> mode <mode> commit" where
      • <itf> is the interface, such as wlan0
      • <essid> is the network name
      • <ap> is the MAC of the AP
      • <key> is the WEP key
      • <mode> is "Managed" for a regular client
      So e.g. "iwconfig eth0 ap 00:60:1D:01:23:45" FAILS
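
    Putting the manual-association steps above together in one sequence (a sketch; essid and channel are examples, and dhclient is assumed for getting an address on an open AP):

      sudo ifconfig wlan0 down
      sudo iwconfig wlan0 mode managed
      sudo iwconfig wlan0 essid networkname    # your essid
      sudo iwconfig wlan0 channel 11           # your channel
      sudo ifconfig wlan0 up
      sudo iwlist wlan0 scan                   # check the AP is visible
      sudo dhclient wlan0                      # only works on an open/unprotected AP
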
    Then: Radiotools include:
    • airmon-ng: puts wireless chip in monitoring mode
    • airbase-ng: for attacking clients rather than access points
    • airodump-ng: dumps file for aircrack, "airodump-ng -w filename eth1"
    • aircrack-ng: cracks WEP/WPA access, for cracking WPA preshared keys, a dictionary file needs to be provided
      • where to get it from? Ref: http://aircrack-ng.org/doku.php?id=faq#where_can_i_find_good_wordlists
      • how to get it into the USB-booted system? maybe via connecting to internet or email and downloading onto usb?

    LEGACY If you want explicit networking, start it with "/etc/init.d/networking start". This results in DHCP requests on eth0, i.e. the ethernet LAN interface. How about starting the wireless? Try iwconfig. First kill the LAN using "ifdown eth0" etc. Then "ifup eth1", which is wireless. Running "iwconfig" will then illustrate that if there are no DHCP offers, the interface will go from "radio off" to "unassociated", which I interpret as "on" but "no IP address". If you do "iwlist wlan0 scan" you can see the ESSID and the RF quality info. If you see the ESSID but cannot get an IP address, that may be due to protection such as WPA.

    Kali

    Kali...

    Burp

    On Kali. See also burp basics.

    Getting started

    Start with HELP:
    • Documentation not available due to 'embedded browser error'. Use health check: Server error, inconsistency detected by ld.so: dl-lookup.c: 111 check_match: assertion 'version->filename==NULL||! etc ... failed...
    • Getting started: same error
    • Using burp: same error
    • Conclusion: use those on other platform, see burp basics....
    THIS MAY BE RELATED TO THE USE OF ROOT.

    Confirm your proxy listener is active

    • First, you need to confirm that Burp's proxy listener is active and working.
    • Go to the "Proxy" tab, then the "Options" sub-tab, and look in the "Proxy Listeners" section.
    • You should see an entry in the table with the checkbox ticked in the Running column, and "127.0.0.1:8080" showing in the Interface column. If so, please go to the section "Configure your browser to use the proxy listener" below.

    Configure your browser to use the proxy listener

    • Secondly, you need to configure your browser to use the Burp Proxy listener as its HTTP proxy server. To do this, you need to change your browser's proxy settings to use the proxy host address (by default, 127.0.0.1) and port (by default, 8080) for both HTTP and HTTPS protocols, with no exceptions.
    • This is browser specific
    • In Firefox, go to the Firefox Menu. Click on “Preferences” / "General". Scroll to the “Network” settings. Select the "Manual proxy configuration" option.
      • Enter your Burp Proxy listener address in the "HTTP Proxy" field (by default this is set to 127.0.0.1).
      • Next enter your Burp Proxy listener port in the "Port" field (by default, 8080).
      • Make sure the "Use this proxy server for all protocols" box is checked.
      • Delete anything that appears in the "No proxy for" field.
      • Now click "OK" to close all of the options dialogs.

    Installing Burp's CA Certificate in your browser

    By default, when you browse an HTTPS website via Burp, the Proxy generates an SSL certificate for each host, signed by its own CA certificate. This CA certificate is generated the first time Burp is run, and stored locally.

    To use Burp Proxy most effectively with HTTPS websites, you will need to install Burp's CA certificate as a trusted root in your browser.

    If you have not already done so, configure your browser to use Burp as its proxy, and configure Burp's Proxy listener to generate CA-signed per-host certificates (this is the default setting). Then use the links below for help on installing Burp's CA certificate in different browsers: .

    With Burp running, visit http://burp in your browser and click the "CA Certificate" link to download and save your Burp CA certificate. Import in Firefox CA certs. It will appear as PortSwigger, not as Burp.

    QUOTE Also: Note: If you install a trusted root certificate in your browser, then an attacker who has the private key for that certificate may be able to man-in-the-middle your SSL connections without obvious detection, even when you are not using an intercepting proxy. To protect against this, Burp generates a unique CA certificate for each installation, and the private key for this certificate is stored on your computer, in a user-specific location. UNQUOTE

    Trying to read this cert with openssl fails (see the sketch below). Importing it into Firefox as a trusted CA went OK. Since the DER may contain a chain and a private key - anything is possible now!
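
    A likely reason openssl chokes: the downloaded Burp CA certificate is DER-encoded, while openssl's x509 command expects PEM unless told otherwise. A sketch (the file name is an assumption):

      openssl x509 -inform DER -in cacert.der -noout -text     # inspect the DER certificate
      openssl x509 -inform DER -in cacert.der -out cacert.pem  # convert it to PEM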

    Connecting

    Firefox now talks to the Burp proxy. When I visit www.marcsel.eu (or google.com) nothing happens. Is the browser hanging? Is the proxy stopping the request from going out? Start/stop Burp...OK.

    The tab proxy / intercept has a button: 'intercept is on/off'.

    WebScarab

    From OWASP. Download 'webscarab-installer-....jar' from Sourceforge. Then "exec java -jar webscarab-installer ...". It goes into /home/marc4/WebScarab; start via "java -jar webscarab---.jar".

    Metasploit/Armitage

    Will not install on Windows unless you deactivate the antivirus. Downloaded the community edition on Angkor and registered it. Then made the installer executable with "chmod +x metasploitinstallername.run", and sudo'd it. Create an account, ref KTB. To start Metasploit: cd /opt/metasploit, "sh ctlscript.sh start" (or stop). Point your browser to localhost, port 3790.
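
    The install/start sequence above, condensed (the installer file name is the generic placeholder used in these notes):

      chmod +x metasploitinstallername.run    # make the downloaded installer executable
      sudo ./metasploitinstallername.run      # run the installer
      cd /opt/metasploit
      sudo sh ctlscript.sh start              # or: stop
      # then point your browser to localhost, port 3790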

    Applications - Office & Multimedia

    Libre Office

    How to set a font in a document

    Then: how do you set a font in a document?
    • LibreOffice: HELP: see Tools - Options - LibreOffice - Fonts => nothing useful
      • Replacement table: to replace fonts by others
      • Fonts for html
    • LibreOffice
      • seems to have a preference to use 'Styles'
      • select some text, right-click, select 'paragraph' and you'll see fonts as an option of 'paragraph style'

    How to update the table of contents look

    How to update the look of the table of contents: edit the 'contents' styles.

    How to sign a document

    To sign, go to File / Digital Signatures in the menu. On Linux this works with GnuPG. Which raises the question: how do you get your Belgian eID certificate into GnuPG?

    xanim

    Plays among others AVI and QuickTime files. Tried on imagine:
    1. which movie files do we have: 'locate *.mov'
    2. try "xanim /Kassandra_Data/Images/Hubble/....."
    3. "XAnim rev 2.80.0"
    4. fails with "video present but not yet supported - video codec Radius Cinepak not supported"
    Way forward:
    • Read-1 Please read "cinepak.readme": download, compile, ...
    • Read-2 SuSE: /usr/share/doc/packages/xanim/readme.suse: you may have to download from the net.
    • CONCLUSION: try SuSE 7.1.

    KDE applications -kaddress

    Abandoned.

    Latex

    Core Latex distribution- TexLive

    Two ways to install TeX:
    • via OS (e.g. gnome-software, Synaptics, ...)
    • ‘native’ via tlmgr
    GrayTiger: via the gnome 'software' utility. What have we got? Use Synaptics, search texlive in 'description and name'.

    Legacy

    Install "Tex Live" basic packages via Muom package manager. Files go in /usr/bin, /usr/share and many other locations and subdirs.

    TexMaker

    TexMaker installation

    GrayTiger: via gnome software: Texmaker 5.0.3 (compiled with Qt 5.15.2) Project home site : http://www.xm1math.net/texmaker/

    TexMaker basics

    Legacy: Texworks

    Install "Texworks" packages via Muom package manager. Files go in /usr/bin/texworks etc. Configuration in /home/marc/.config/TUG/TeXworks.conf. Resources in /home/marc/.TeXworks.

    Legacy - Texlipse - Latex on Eclipse

    Then install Eclipse, and add Texlipse. Configure as per http://texlipse.sourceforge.net/manual/configuration.html.

    XII.107 Philips TV

    Linking to Philips TV 9604: "http://www.consumer.philips.com/c/televisie/9000-serie-32-inch-1080p-full-hd-digitale-tv-32pfl9604h_12/prd/nl/be/". The TV is the "MediaRenderer", supporting, according to the Philips website: MP3, WMA version 2 to version 9.2, slideshow files (.alb), JPEG images, MPEG1, MPEG2, MPEG4, AVI, H.264/MPEG-4 AVC, MPEG program stream PAL, WMV9/VC1. Connectivity is Ethernet UTP5, USB, WiFi 802.11g (built-in). DLNA 1.0 certified.

    Check it on http://www.dlna.org/products/. On this site you can view the DLNA Certificate for every product. For example the 37PFL9604 Certificate can be found here: http://certification.dlna.org/certs/REG57370173.pdf. It supports DLNA 1.0. Useful discussion on http://blog.hznet.nl/2009/06/philips-8000-series-and-dlna-not-really/. Conclusion IMHO: it should be possible to stream lots of different video formats to the TV via DLNA, even through the network interfaces (wireless/ethernet). The best approach may be to find some formats that are reliably supported on the TV and then convert whatever you have to such a format by transcoding on the fly.

    XII.108 Mediatomb

    XII.108.1 Basics

    Angkor2, installed Mediatomb version 0.12.1 (via "sudo aptitude install mediatomb") in July 2013. Mediatomb implements the UPnP MediaServer v1.0 specification according to www.upnp.org. Should work with any UPnP MediaRenderer. Url: mediatomb.cc with documentation. After installing you get:
    • /home/marc/.mediatomb directory with config.xml, mediatomb.db and mediatomb.html
    Main config is in .mediatomb/config.xml. Apparently:
    • there is a server process started at boot via "/etc/rc2.d/s98mediatomb"
    • still you need to start it from a terminal? - weird
    • you can interact via GUI (by default disabled) or CLI
  • 'newschool': service mediatomb status
  • 'newschool': service mediatomb start (or stop)
  • mediatomb --help shows you the options
  • mediatomb --compile-info will list parameters of the version you are running
  • mediatomb --add /home/marc/Pictures/ --add /home/marc/Videos
  • mediatomb -d (starts mediatomb as a background daemon)
  • then point your browser to /home/marc/.mediatomb/mediatomb.html to start the GUI (if enabled). Good basic info in: "https://help.ubuntu.com/community/MediaTomb".

    Usage

    Adding music. Apparently you can add entries via the GUI, or via the CLI. GUI: starting Mediatomb from a user terminal results in an information listing, with a pointer to the GUI, e.g.: "2009-12-20 17:34:58 INFO: http://192.168.1.5:49152/". In the GUI, use the right half of the screen to navigate the filesystem and add your libraries to the database.

    CLI
    • Adding music: 'mediatomb --add /home/c4/Saad'
    • Adding pictures: 'mediatomb --add /home/c4/Whisky'

    Accessing the music over the network. According to the documentation:"MediaTomb should work with any UPnP compliant MediaRenderer". How do you identify the status of the Mediatomb server? When running MediaServer on Sanne's HP laptop, I can navigate the entire Angkor filesystem.... scary. According to the documentation, this is because MediaTomb is to be used in a friendly home setting. For better security: run under a more restricted user account, or simply disable the GUI entirely (I assume you can then still work locally via the CLI).

    Brasero cd burning

    To get it working: 'sudo dbus-launch brasero'

    To write mp4 movies, use a data format.

    mp3 encoding with abcde and lame

    Installation

    • install abcde and helpers (sudo apt-get install abcde cd-discid lame cdparanoia id3 id3v2).
    • adapt the /etc/abcde.conf file as per below so it will support output to mp3.
    • finally, with an audio cd in your drive, invoke as 'abcde -o mp3'. Temp wav files are created, but a directory will be created with the mp3's.
    The program abcde is actually a long script that manipulates a handful of programs, which I have conveniently added into the Terminal command above. It can actually do a great deal more than simply produce reasonable mp3 files but I will leave you to explore its many other possibilities. The programs that will be used to produce mp3s in this example are:

    abcde

    "A Better CD Encoder" = abcde! Ordinarily, the process of grabbing the data off a CD and encoding it, then tagging or commenting it, is very involved. The abcde script is designed to automate this.

    cd-discid

    In order to do CDDB (Compact Disc Database) queries over the Internet, you must know the DiscID of the CD you are querying. cd-discid provides you with that information. It outputs the discid, the number of tracks, the frame offset of all of the tracks, and the total length of the CD in seconds, on one line in a space-delimited format.
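
    For example (a sketch; the device node is an assumption, adjust to your drive):

      cd-discid /dev/cdrom
      # output: <discid> <track count> <frame offset of each track ...> <total length in seconds>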

    cdparanoia

    cdparanoia retrieves audio tracks from CDROM drives. The data can be saved to a file or directed to standard output in WAV, AIFF, AIFF-C or raw format. For the purposes of conversion to mp3 abcde directs cdparanoia to produce WAV files.

    lame

    LAME is a program which can be used to create MPEG Audio Layer III (MP3) files.

    id3

    id3 is an ID3 v1.1 tag editor. ID3 tags are traditionally put at the end of compressed streamed audio files to denote information about the audio contents.

    id3v2

    id3v2 is an ID3 v2 tag editor. ID3 tags are traditionally put at the end of compressed streamed audio files to denote information about the audio contents. Using this command line software you can add/modify/delete id3v2 tags and optionally convert id3v1 tags to id3v2.

    abcde looks for two files on startup: /etc/abcde.conf and ~/.abcde.conf. The file abcde.conf is a fully commented configuration file that is well worth looking at, if only to copy to your home directory as ~/.abcde.conf (as is most usually done). Or, if you are only interested in creating mp3s, my gift to you, Gentle Reader, is my own ~/.abcde.conf file, reproduced below.

    Sample 'abcde.conf'

    ---START OF abcde.conf example file---
    #-----------------$HOME/.abcde.conf-----------------
    #
    # A sample configuration file to convert music cds to
    # MP3 format using abcde version 2.3.99.6
    #
    # http://andrews-corner.org/abcde.html
    #--------------------------------------------------
    #
    # Specify the encoder to use for MP3. In this case
    # the alternatives are gogo, bladeenc, l3enc, xingmp3enc, mp3enc.
    MP3ENCODERSYNTAX=lame
    # Specify the path to the selected encoder. In most cases the encoder
    # should be in your $PATH as I illustrate below, otherwise you will
    # need to specify the full path. For example: /usr/bin/lame
    LAME=lame
    # Specify your required encoding options here. Multiple options can
    # be selected as '--preset standard --another-option' etc.
    LAMEOPTS='--preset extreme'
    # Output type for MP3.
    OUTPUTTYPE="mp3"
    # The cd ripping program to use. There are a few choices here: cdda2wav,
    # dagrab, cddafs (Mac OS X only) and flac.
    CDROMREADERSYNTAX=cdparanoia
    # Give the location of the ripping program and pass any extra options:
    CDPARANOIA=cdparanoia
    CDPARANOIAOPTS="--never-skip=40"
    # Give the location of the CD identification program:
    CDDISCID=cd-discid
    # Give the base location here for the encoded music files.
    OUTPUTDIR="$HOME/music/"
    # Decide here how you want the tracks labelled for a standard 'single-artist',
    # multi-track encode and also for a multi-track, 'various-artist' encode:
    OUTPUTFORMAT='${OUTPUT}/${ARTISTFILE}-${ALBUMFILE}/${TRACKNUM}.${TRACKFILE}'
    VAOUTPUTFORMAT='${OUTPUT}/Various-${ALBUMFILE}/${TRACKNUM}.${ARTISTFILE}-${TRACKFILE}'
    # Decide here how you want the tracks labelled for a standard 'single-artist',
    # single-track encode and also for a single-track 'various-artist' encode.
    # (Create a single-track encode with 'abcde -1' from the commandline.)
    ONETRACKOUTPUTFORMAT='${OUTPUT}/${ARTISTFILE}-${ALBUMFILE}/${ALBUMFILE}'
    VAONETRACKOUTPUTFORMAT='${OUTPUT}/Various-${ALBUMFILE}/${ALBUMFILE}'
    # Put spaces in the filenames instead of the more correct underscores:
    mungefilename ()
    {
      echo "$@" | sed s,:,-,g | tr / _ | tr -d \'\"\?\[:cntrl:\]
    }
    # What extra options?
    MAXPROCS=2       # Run a few encoders simultaneously
    PADTRACKS=y      # Makes tracks 01 02 not 1 2
    EXTRAVERBOSE=y   # Useful for debugging
    EJECTCD=y        # Please eject cd when finished :-)
    ---END OF abcde.conf example file---

    XII.114 ffmpeg/avconv

    XII.114.1 installation

    On Angkor, via Synaptics. Run "ffmpeg" to find out it is an "FFmpeg video convertor". Run "ffmpeg -formats" to see supported formats. Both ogg and mp3 seem to be present.

    XII.114.2 Usage

    "man ffmpeg" has sample commands at the end. For ogg to mp3: "ffmpeg -i file.ogg file.mp3". For ape to mp3: "ffmpeg -i file.ape file.mp3". Using in on 29 Aug 2013 resulted in: “This program is deprecated, please use avconv instead". This was already installed, but failed to do the conversion, complaining about codec missing. Although running “avconv -formats” indicates it does support mp3. So what is wrong then? Some form of encryption?

    XII.115 xournal - pdf

    Pdf annotator.

    XII.116 pdftk

    The Swiss army knife for PDFs. www.pdflabs.com Sample command to create a pdf that prevents text copying:
    • pdftk "03b - Productie_mgmt_processen_en_BOM.pdf" output "03b - Productie_mgmt_processen_en_BOM.mls.pdf" owner_pw Tx9Az7 allow printing
    • pdftk "03b - Productie_mgmt_processen_en_BOM.pdf" output "03b - Productie_mgmt_processen_en_BOM.mls.pdf" owner_pw Tx9Az7 encrypt_128bit allow Printing ModifyAnnotations

    XII.116B PDF-Over - pdf signer by A-SIT

    Downloaded from Joinup. Start installation with: 'marc@GrayTiger:~/.jdks/openjdk-18.0.2/bin$ ./java -jar /home/marc/Downloads/PDF-Over-4.4.1.jar'. Installed in /home/marc/PDF-Over. Expects a mobile phone/smartcard from Austria (CCE) - https://joinup.ec.europa.eu/collection/eidentity-and-esignature/solution/citizen-card-encrypted-cce/about

    XII.117 MP3 Sansa player (Sandisk)

    Basics

    Problem: resetting

    PROBLEM/SOLUTION from "http://forums.sandisk.com/": if the player is not turning on or charging, there may be a fix. There was a known issue where the battery can become overly discharged. The fix is to hold the power button down for 30 seconds (this resets the player), then plug it in and let it charge. The LCD should turn on and it will start charging within an hour of being plugged in, typically 15 to 20 minutes depending on how much power the charging source gives. Once the LCD turns on and the device starts charging, let it charge for about 3 hours for a full charge.

    This issue is supposed to be fixed with the latest version of the firmware. After it finishes charging, make sure you update to the latest firmware. There is a firmware thread at the top of this board with the download links and installation instructions.

    MLS: other posters on the forum report problems updating firmware... so beware.

    Microsoft Teams

    Deinstalled, use via browser: https://www.microsoft.com/microsoft-teams/join-a-meeting or https://www.microsoft.com/nl-be/microsoft-teams/join-a-meeting.

    Basics/license/account

    There is:
    • Enterprise license, ranging from free to expensive,
    • Personal license, idem
    • You can join a Teams meeting as guest, without any license or account.
    For Enterprise or Personal license, you need to 'register'. I assume
    • I have my personal Microsoft account - which does not seem to be involved - this account may or may not be linked to 'work' or 'school' account.
    • Using RHUL Teams, I'm using my RHUL account, starting Teams from a browser session (e.g. after logging on to email) shows the 'live' account in the url
      • Once in RHUL Teams I can deduce I have a 'work' account (because I can switch to a 'Teams for personal use' account, and doing so asks me to sign out of my 'work' account).
      • 'Your work or school account belongs to your home organization. You can not leave your home organization.'
      • You may be a member of additional organisations. The organisation's administrator needs to invite you. You can leave additional organisations. To rejoin you'll need an invitation again.
    • When invited by another entity, they invite an email address... which may link to an account?

    Basics/client-side software

    Under my RHUL live account, I can see I have in the past connected to Teams using Linux/Firefox and:
    • Microsoft Teams Web Client
    • Office 365
    • Office 365 Exchange Online
    Join via browser: https://www.microsoft.com/microsoft-teams/join-a-meeting or https://www.microsoft.com/nl-be/microsoft-teams/join-a-meeting

    On GrayTiger

    Installation: I downloaded a Teams client into /Downloads/teams_1.4.00.26453_amd64.deb. Starting it shows 'This version of Teams only supports work or school accounts managed by an organisation.' If you press the 'switch' button, you are asked to log in at RHUL.

    See /home/marc/.config/Microsoft and teams.

    Uses Gnome autostart. You can find the start script by cat-ing it: cat $(which teams). Stopping autostart: https://askubuntu.com/questions/1254225/how-to-disable-auto-startup-of-microsoft-teams-in-ubuntu - no great help. However, if you use gnome-tweaks you will find the 'startup applications' settings, and Teams was installed there. Remove it here and the autostart is gone.

    Removal. There are:
    • Directory '/home/marc/.config/Microsoft' with lots of subdirs and files
    • The deb file in '/Downloads/teams_1.4.00.26453_amd64.deb'
    • The start script in /usr/bin/teams
    • Files in /usr/share
    I did 'sudo apt remove teams', that seems to remove stuff...

    XII.119 Slack

    Basics

    Installed in '/home/marc/Downloads/usr'. Usage: start in a terminal ('/home/marc/Downloads/usr/bin/slack').