Jelle's notes
A collection of public notes on various topics generated using mdbook
Reproducible Builds
- Python issues due to tests?: https://reproducible.archlinux.org/api/v0/builds/342940/diffoscope
- Java jar generation in libs
Java JAR
Arch sphinx issue
Potential fix https://gitlab.archlinux.org/archlinux/packaging/packages/pgadmin4/-/commit/29801f1125a315cb0f54e186619b7cba3cfe6112
Alternative:
/usr/share/makepkg/reproducible/python.sh
[jelle@t14s][~/projects/reproducible-website]%pacman -F environment.pickle
extra/alice-vision 2.4.0-18
usr/share/doc/aliceVision/htmlDoc/.doctrees/environment.pickle
extra/dleyna-docs 0.8.2-2
usr/share/doc/dleyna/.doctrees/environment.pickle
extra/ghc-static 9.0.2-3
usr/share/doc/ghc/html/haddock/.build-html/.doctrees/environment.pickle
usr/share/doc/ghc/html/haddock/.doctrees/environment.pickle
extra/libcamera-docs 0.1.0-2
usr/share/doc/libcamera/html/.doctrees/environment.pickle
extra/python-awkward-docs 1.10.2-2
usr/share/doc/python-awkward/.doctrees/environment.pickle
extra/python-uproot-docs 4.3.5-4
usr/share/doc/python-uproot/.doctrees/environment.pickle
extra/python-websockets 10.4-3 [installed: 12.0-1]
usr/share/doc/python-websockets/.doctrees/environment.pickle
Fedora
reproducing script https://github.com/keszybz/fedora-repro-build
- https://github.com/rpm-software-management/mock/issues/692 - clamp timestamps
- https://github.com/rpm-software-management/rpm/pull/1532 - build info file
- try to reproduce cockpit with mockbuild
https://github.com/fepitre/rpmreproduce
flatpak
- https://fedoramagazine.org/an-introduction-to-fedora-flatpaks/
- https://blogs.gnome.org/mclasen/2018/07/07/flatpak-making-contribution-easy/
- https://ranfdev.com/blog/flatpak-builds-are-not-reproducible/
- https://github.com/flatpak/flatpak-builder/issues/251
- https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/issues/1320
- diffoscope support?
- CI on flathub repositories?
- reproducing
Diffing a flatpak
For Cockpit, comparing the build dir output
flatpak-builder --disable-cache --disable-rofiles-fuse --force-clean flatpak-build-dir1 org.cockpit_project.CockpitClient.yml
flatpak-builder --disable-cache --disable-rofiles-fuse --force-clean flatpak-build-dir2 org.cockpit_project.CockpitClient.yml
diffoscope flatpak-build-dir1 flatpak-build-dir2
Comparing using two repos:
flatpak-builder --repo=repo1 --disable-cache --disable-rofiles-fuse --force-clean flatpak-build-dir org.cockpit_project.CockpitClient.yml
flatpak-builder --repo=repo2 --disable-cache --disable-rofiles-fuse --force-clean flatpak-build-dir org.cockpit_project.CockpitClient.yml
Get the refs from ostree:
ostree refs --repo=repo1
ostree show --repo=repo1 runtime/org.cockpit_project.CockpitClient.Debug/x86_64/devel
ostree show --repo=repo2 runtime/org.cockpit_project.CockpitClient.Debug/x86_64/devel
Confirm the ContentChecksum is the same.
live iso
Reproducible live iso
Issues
- libopensmtpd - mandoc has a "$Mdocdate$" variable which does not respect SOURCE_DATE_EPOCH
- hugin - gzip timestamps
- pcp - gzip timestamp
- libkolabxml XML ordering https://git.kolab.org/T2642 https://bugzilla.opensuse.org/show_bug.cgi?id=1060506 try to set XERCES_DEBUG_SORT_GRAMMAR, but that needs to be in xerces-c which is kinda untested and dumb
- mm-common
- musescore https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope-results/musescore3.html
- openpmix PMIX_CONFIGURE_HOST
- perl-crypt-random-tesha2 don't advertise entropy
- ssr records $USER and $date
- libgtop records uname
- openxr script is not reproducible.
- php phar timestamps
- namazu records $(hostname)
- dosemu timestamps
- echoping hostname
- python-lxml-docs timestamp in "Generated On"
- ant-doc javadoc adds a timestamp to documentation: Generated by javadoc (14.0.2) on Sun Nov 15 16:33:44 UTC 2020
- emelfm2 kernel + timestamp
- libiio timestamp
- gajim man pages (gzip) and pyc bytecode
- fs-uae zip file not ordered? permission? zip issues?!
- gutenprint uname/ timestamp recording
- libmp4v2 timestamp
- gdk-pixbuf2-docs order issue in generated documentation
- ghostpcl timestamp
- libgxps timestamp
- netcdf & netcdf-fortran uname
- nethack build date
- python-lxml timestamp in generated docs
- qastools gzip timestamp (https://gitlab.com/sebholt/qastools/)
- qtikz sqlite database with datetime difference in TimeStampTable
- rmlint - gzip timestamp and timestamp in rmlint
- glhack - timestamp
- glob2 - timestamp
- docker - timestamp
- radamsa - needs a rebuild
- eq10q - needs a rebuild
- harvid needs a rebuild due to size issues with an older makepkg version (fails to build)
- colord binary seems to embed the profile data as a random hash?
- tbb timestamp, build host and build kernel
- ruby-colorize timestamp in gemspec
- rebuild ruby-* packages which do not remove "$pkgdir/$_gemdir/gems/$_gemname-$pkgver/ext" as it contains non-reproducible files.
- i7z - gzip timestamp
- openmpi - records hostname
- v2ray-domain-list-community - geosite.dat not ordered
- unrealircd - timestamp in binary
- libcec - hostname/timestamp
- hevea - ocaml build /tmp/$tmp path differs https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=786913
- mari0 - zip file
- arj - date https://reproducible.archlinux.org/api/v0/builds/118386/diffoscope
- ibus - date
- argyllcms - (date) - https://www.freelists.org/list/argyllcms send email about created date containing hours/minutes/second and SOURCE_DATE_EPOCH
- dd_rescue - man page gz timestamp => mail maintainer https://sourceforge.net/p/ddrescue/tickets/
- deepin-wallpapers => most likely an order issue with the wildcard in the makefile; nope, most likely image-blur is not reproducible
openxr reproducer
python specification/scripts/genxr.py -registry specification/registry/xr.xml -o /home/jelle/projects/OpenXR-SDK-Source/build/include/openxr/ openxr_reflection.h
Man page gzip timestamp issue
Fixing the gzip timestamp issue in every affected package is a lot of work, and patching upstream everywhere is not really doable. An idea might be to detect non-reproducible gzip files and let a makepkg option like zipman (or an extension of zipman) take care of this.
touch foo
gzip foo
file foo.gz | grep modified &>/dev/null && gunzip -c foo.gz | gzip -9 -n -c > test.gz
Haskell packages
Try to build them without !strip and then compare the packages.
https://gitlab.haskell.org/ghc/ghc/-/wikis/deterministic-builds https://gitlab.haskell.org/ghc/ghc/-/issues/12935
Ideas
- Year blog post
- Documentation about reproducible builds in the packager wiki / packaging wiki
Package pacman in Debian
-> sudo pbuilder create
-> sudo cowbuilder create
-> sudo gbp buildpackage --git-ignore-new --git-pbuilder -nc
rebuilderd-website
- Improve loading performance
- add make install target
Python issues
For pyc differences, PYTHONHASHSEED can be set to a fixed value to try to circumvent the random hash initialisation being embedded in pyc files.
For test files showing up in the diffoscope results as pyc files but missing from the rebuilt package, the issue is probably that pyc files generated by running the tests are installed erroneously. Exporting PYTHONDONTWRITEBYTECODE=1 when running the tests prevents this.
Rebuilderd
Rebuilderd doesn't clean up old builds; to remove all builds which are no longer referenced by a package:
delete from builds where id not in (select build_id from packages where build_id is not null);
Rebuilderd also stores logs for succeeded builds, which isn't required.
Requeueing bad builds can be done as follows:
rebuildctl pkgs requeue --suite core --status BAD
Improvements
- add build date to the output of rebuildctl pkgs ls --status BAD --suite core
- add build date to the /log output
- add build host to the /log output (so one can identify if a host has a bad build env)
- add a cleanup thread that runs occasionally cleaning up old rebuild results.
Autoclassify script
Make an autoclassify script based on the diffoscope html output stored in rebuilderd. Maybe using the rebuilderd database for now => extract the diffoscope html and inspiration drawn from this script
Twitter bot
Twitter bot for notifications about reproducible builds in IRC, and for allowing tweets from IRC.
- gazouilleur was used but requires mongodb, any alternatives?
- twitter irc bot from nerdhaus
Recipes
Quiche
- bacon strips
- broccoli
- champignons
- grated cheese
- 4 eggs
- 200ml cooking cream
Pancakes
- 300 gram flour
- 1 teaspoon salt
- 2 eggs
- 500 ml milk
- 30 gram butter
Practice
- Songs
- Music Theory
- Scales
Songs
- About a girl - Nirvana
- Johnny b goode
- Plush - Stone Temple Pilots
- Purple Haze
- Neutral Milk Hotel - In The Aeroplane Over The Sea
- Can't explain - The Who
- Heart of Gold
- Wolfmother - Woman
Holy Ghost Fire riff
E|--------------------------------------|
B|--------------------------------------|
G|--------------------------------------|
D|-----------5-------------0------------|
A|------5_7~---7-5-----0^2---5---5^7~---|
E|-3^0-------------7_3---------7--------|
Picking
https://www.soundslice.com/slices/7jHcc/
Scales
Minor Pentatonic Scale
E|---------------------5-8-------------|
B|-----------------5-8-----------------|
G|-------------5-7---------------------|
D|---------5-7-------------------------|
A|-----5-7-----------------------------|
E|-5-8---------------------------------|
Major scale
e|---------------------------4-5-|
B|-----------------------5-7-----|
G|-----------------4-6-7---------|
D|-----------4-6-7---------------|
A|-----4-5-7---------------------|
E|-5-7---------------------------|
Minor Scale
E|-----------------------------5-7-8-|
B|-----------------------5-6-8-------|
G|-----------------4-5-7-------------|
D|-------------5-7-------------------|
A|-------5-7-8-----------------------|
E|-5-7-8-----------------------------|
Theory
- Chords, Progressions & Keys
- Triads
- Fretboard
- Chords of a key
- Chord Theory
Chords of a key
G Major scale
G A B C D E F#
The step from the 3rd to the 4th note is a half step, and from the 7th to the octave is a half step; all other intervals are whole steps.
Learning Ardour
- ardour 6 quickstart
- How to monitor my recording tracks properly
- How to make mono recording stereo
- Learn recording hotkeys in ardour
Hedgedoc
- Style frontpage
Configuration
/etc/webapps/hedgedoc/config.json
{
"production": {
"sessionSecret": "laPah7ohSheeroo4yep5shi7ioghie",
"email": false,
"domain": "archtest.lxd",
"loglevel": "debug",
"protocolUseSSL": true,
"allowAnonymous": false,
"hsts": {
"enable": true,
"maxAgeSeconds": 31536000,
"includeSubdomains": true,
"preload": true
},
"csp": {
"enable": true,
"directives": {
},
"upgradeInsecureRequests": "true",
"addDefaults": true,
"addDisqus": false,
"addGoogleAnalytics": false
},
"cookiePolicy": "lax",
"db": {
"dialect": "sqlite",
"storage": "/var/lib/hedgedoc/db.hedgedoc.sqlite"
},
"linkifyHeaderStyle": "gfm"
}
}
/etc/webapps/hedgedoc/sequelizerc
var path = require('path');
module.exports = {
'config': path.resolve('config.json'),
'migrations-path': path.resolve('lib', 'migrations'),
'models-path': path.resolve('lib', 'models'),
'url': 'sqlite:///var/lib/hedgedoc/db.hedgedoc.sqlite'
}
Nginx
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /socket.io/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
Keycloak
systemd hedgedoc service override
CMD_OAUTH2_USER_PROFILE_URL=https://archkeycloak.lxd/auth/realms/archlinux/protocol/openid-connect/userinfo
CMD_OAUTH2_USER_PROFILE_USERNAME_ATTR=preferred_username
CMD_OAUTH2_USER_PROFILE_DISPLAY_NAME_ATTR=name
CMD_OAUTH2_USER_PROFILE_EMAIL_ATTR=email
CMD_OAUTH2_TOKEN_URL=https://archkeycloak.lxd/auth/realms/archlinux/protocol/openid-connect/token
CMD_OAUTH2_AUTHORIZATION_URL=https://archkeycloak.lxd/auth/realms/archlinux/protocol/openid-connect/auth
CMD_OAUTH2_CLIENT_ID=hedgedoc
CMD_OAUTH2_CLIENT_SECRET=23829d32-e820-4d03-8c5d-7a6b996daec0
CMD_OAUTH2_PROVIDERNAME=Keycloak
CMD_DOMAIN=archtest.lxd
CMD_PROTOCOL_USESSL=true
CMD_URL_ADDPORT=false
golang
Project ideas
- golang dns client using RDAP with json output
Start a project
go mod init github.com/jelly/$project
Common modules
- cobra
- logrus
Types
- slice []string{"lala", "lolol"}
- string
- bool
Gotchas
Go executes init functions automatically at program startup, after global variables have been initialized.
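A minimal sketch of that ordering (the variable and function names are illustrative): package-level variable initializers run first, then init, then main.

```go
package main

import "fmt"

// greeting's initializer runs before init() and main().
var greeting = makeGreeting()

func makeGreeting() string {
	fmt.Println("1: package variable initialized")
	return "hello"
}

func init() {
	fmt.Println("2: init runs after package variables")
}

func main() {
	fmt.Println("3: main runs last:", greeting)
}
```

Running this prints the three numbered lines in order.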
Type assertions
var greeting interface{} = "hello world"
greetingStr, ok := greeting.(string)
if !ok {
fmt.Println("not asserted")
}
Type assertions can only take place on interfaces. On the first line we assign a string to the interface greeting. While greeting holds a string, its static type is interface{}. To get back the underlying string we assert its type with greeting.(string).
If you are not sure of the type of an interface a switch can be used:
var greeting interface{} = 42
switch g := greeting.(type) {
case string:
fmt.Println("string of length", len(g))
case int:
fmt.Println("integer of value", g)
default:
fmt.Println("no idea what g is")
}
This is called an assertion because the original type of greeting (an interface) is not changed.
Type conversions
greeting := []byte("hello world")
greetingStr := string(greeting)
In Golang a type defines:
- How the variable is stored (underlying data structure)
- What you can do with the variable (methods / functions it can be used in)
In Golang you can define your own types
// myInt is a new type whose base type is `int`
type myInt int
// The AddOne method works on `myInt` types, but not regular `int`s
func (i myInt) AddOne() myInt { return i + 1}
func main() {
var i myInt = 4
fmt.Println(i.AddOne())
}
As myInt uses the same data structure underneath, we can convert a myInt to an int.
var i myInt = 4
originalInt := int(i)
This means types can only be converted if the underlying data structure is the same.
declaring variables
There are two ways to declare variables in golang (Go infers the type from initialization)
- using the var keyword
var foo int = 4
- using a short declaration operator (:=)
foo := 4
Differences:
var keyword:
- used to declare and initialize the variables inside and outside of functions
- the scope can therefore be package level or global level scope or local scope
- declaration and initialization of the variables can be done separately
- optionally a type can be put with the declaration
short declaration operator:
- used to declare and initialize the variable only inside functions
- variables have only local scope as they can only be declared in functions
- declaration and initialization of the variables must be done at the same time
- there is no need to put a type
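A small sketch contrasting the two forms (variable names are illustrative); note that := is rejected outside a function body:

```go
package main

import "fmt"

// Package level: only var is allowed; `version := "1.0"` here
// would be a compile error.
var version = "1.0" // declaration and initialization together, type inferred

var count int // declaration only; starts at the zero value

func main() {
	count = 3       // separate initialization is only possible with var
	name := "jelle" // short declaration: local scope, type inferred
	fmt.Println(version, count, name)
}
```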
struct
Named structs
type Employee struct {
firstName string
lastName string
age int
}
func main() {
emp1 := Employee{
firstName: "Sam",
lastName: "Anderson",
age: 25,
}
// Zero value of a struct, all fields will be 0 or ""
var emp2 Employee
}
Anonymous struct
foo := struct {
firstName string
lastName string
}{
firstName: "Steve",
lastName: "Jobs",
}
Pointers to a struct
emp1 := &Employee{
firstName: "Steve",
lastName: "Jobs",
}
fmt.Println("First Name:", (*emp1).firstName);
fmt.Println("First Name:", emp1.firstName);
Anonymous fields
It is possible to create structs with fields that contain only a type, without a field name. Even though they have no explicit name, by default the name of an anonymous field is the name of its type.
type Person struct {
string
int
}
Nested structs
type Address struct {
city string
state string
}
type Person struct {
name string
age int
address Address
}
Promoted Fields
Fields that belong to an anonymous struct field in a struct are called promoted fields since they can be accessed as if they belong to the struct which holds the anonymous struct field.
type Address struct {
city string
state string
}
type Person struct {
name string
age int
Address
}
func main() {
p := Person{
name: "Naveen",
age: 50,
Address: Address{
city: "Chicago",
state: "Illinois",
},
}
fmt.Println("Name:", p.name)
fmt.Println("Age:", p.age)
fmt.Println("City:", p.city) //city is promoted field
fmt.Println("State:", p.state) //state is promoted field
}
Structs equality
Structs are value types and are comparable if each of their fields is comparable. Two struct variables are considered equal if their corresponding fields are equal.
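A short sketch of both cases (the Point type is made up for this example); note that structs containing non-comparable fields such as slices or maps cannot be compared with == at all:

```go
package main

import "fmt"

type Point struct {
	X, Y int
}

func main() {
	a := Point{X: 1, Y: 2}
	b := Point{X: 1, Y: 2}
	c := Point{X: 3, Y: 4}

	// Equal: every corresponding field is equal.
	fmt.Println(a == b) // true
	// Unequal: at least one field differs.
	fmt.Println(a == c) // false

	// A struct with a slice field is not comparable:
	// type Path struct{ points []Point }
	// comparing two Path values with == would not compile.
}
```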
Interface
An interface is a set of methods and a type.
For structs
An interface is a placeholder for a struct which implements its methods, which can be used to allow a method to take an interface as an argument.
package main
import (
"fmt"
"math"
)
type geometry interface {
area() float64
perim() float64
}
type rect struct {
width, height float64
}
func (r rect) area() float64 {
return r.width * r.height
}
func (r rect) perim() float64 {
return 2*r.width + 2*r.height
}
type circle struct {
radius float64
}
func (c circle) area() float64 {
return math.Pi * c.radius * c.radius
}
func (c circle) perim() float64 {
return 2 * math.Pi * c.radius
}
func measure(g geometry) {
fmt.Println(g)
fmt.Println(g.area())
fmt.Println(g.perim())
}
func main() {
r := rect{width: 3, height: 4}
c := circle{radius: 5}
measure(r)
measure(c)
}
The interface{} type
The interface{} type, the empty interface, has no methods. This means any value satisfies it, so a function which takes an interface{} parameter can be passed any value.
package main
import (
"fmt"
)
func checkType(i interface{}) {
switch i.(type) { // the switch uses the type of the interface
case int:
fmt.Println("Int")
case string:
fmt.Println("String")
default:
fmt.Println("Other")
}
}
func main() {
var i interface{} = "A string"
checkType(i) // String
}
Equality of interface values
Two interface values are equal if they are both nil, or if their underlying values and types are equal.
package main
import (
"fmt"
)
func isEqual(i interface{}, j interface{}) {
if(i == j) {
fmt.Println("Equal")
} else {
fmt.Println("Inequal")
}
}
func main() {
var i interface{}
var j interface{}
isEqual(i, j) // Equal
var a interface{} = "A string"
var b interface{} = "A string"
isEqual(a, b) // Equal
}
goroutines
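This heading is still a stub; as a starting point, a minimal fan-out sketch with sync.WaitGroup and a buffered channel (sumSquares is an illustrative name, not something from these notes):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares spawns one goroutine per input value and collects
// the squared values over a channel.
func sumSquares(n int) int {
	var wg sync.WaitGroup
	results := make(chan int, n) // buffered so no sender blocks

	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) { // runs concurrently with main
			defer wg.Done()
			results <- v * v
		}(i)
	}

	wg.Wait()      // block until every goroutine called Done
	close(results) // lets the range loop below terminate

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(sumSquares(3)) // 1 + 4 + 9 = 14
}
```

Sizing the buffer to the number of goroutines means wg.Wait() is safe before draining the channel.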
modules?
context
Security checklist
Checklists from certifiedsecure.com
Server configuration checklist
Mark result with ✓ or ✗
# | Certified Secure Server Configuration Checklist | Result | Ref | |
---|---|---|---|---|
1.0 | Generic | |||
1.1 | Always adhere to the principle of least privilege | |||
2.0 | Version Management | |||
2.1 | Install security updates for all software | |||
2.2 | Never install unsupported or end-of-life software | |||
2.3 | Install software from a trusted and secure repository | |||
2.4 | Verify the integrity of software before installation | |||
2.5 | Configure an automatic update policy for security updates | |||
3.0 | Network Security | |||
3.1 | Disable all extraneous services | |||
3.2 | Disable all extraneous ICMP functionality | |||
3.3 | Disable all extraneous network protocols | |||
3.4 | Install a firewall with a default deny policy | |||
3.5 | Firewall both incoming and outgoing connections | |||
3.6 | Disable IP forwarding and routing unless explicitly required | |||
3.7 | Separate servers with public services from the internal network | |||
3.8 | Remove all dangling DNS records | |||
3.9 | Enable DNS record signing | |||
4.0 | Authentication and Authorization | |||
4.1 | Configure authentication for access to single user mode | |||
4.2 | Configure mandatory authentication for all non-public services | |||
4.3 | Configure mandatory authorization for all non-public services | |||
4.4 | Configure mandatory authentication for all users | |||
4.5 | Enforce the usage of strong passwords | |||
4.6 | Remove all default, test, guest and obsolete accounts | |||
4.7 | Configure rate limiting for all authentication functionality | |||
4.8 | Disable remote login for administrator accounts | |||
4.9 | Never implement authorization based solely on IP address | |||
5.0 | Privacy and Confidentiality | |||
5.1 | Configure services to disclose a minimal amount of information | |||
5.2 | Transmit sensitive information via secure connections | |||
5.3 | Deny access to sensitive information via insecure connections | |||
5.4 | Store sensitive information on encrypted storage | |||
5.5 | Never use untrusted or expired SSL certificates | |||
5.6 | Configure SSL/TLS to accept only strong keys, ciphers and protocols | |||
5.7 | Configure an accurate and restrictive CAA DNS record | |||
5.8 | Use only widely accepted and proven cryptographic primitives | |||
5.9 | Use existing, well-tested implementations of cryptographic primitives | |||
5.10 | Separate test, development, acceptance and production systems | |||
5.11 | Never allow public access to test, development and acceptance systems | |||
5.12 | Never store production data on non-production systems | |||
5.13 | Configure a secure default for file permissions | |||
5.14 | Configure file permissions as restrictive as possible | |||
5.15 | Disable the indexing of files with sensitive information | |||
5.16 | Configure automated removal of temporary files | |||
6.0 | Logging Facilities | |||
6.1 | Restrict access to logging information | |||
6.2 | Configure logging for all relevant services | |||
6.3 | Configure logging for all authentication and authorization failures | |||
6.4 | Configure remote logging for all security related events | |||
6.5 | Routinely monitor and view the logs | |||
6.6 | Never log sensitive information, passwords or authorization tokens | |||
7.0 | Service Specific | |||
7.1 | Complete the Secure Development Checklist for Web Applications | |||
7.2 | Disable open relaying for mail services | |||
7.3 | Disable email address enumeration for mail services | |||
7.4 | Disable anonymous uploading for FTP services | |||
7.5 | Disable unauthorized AXFR transfers in the DNS | |||
8.0 | Miscellaneous | |||
8.1 | Configure rate limiting for all resource-intensive functionality | |||
8.2 | Prevent unintended denial of service when configuring rate limiting | |||
8.3 | Check configuration of all services for service-specific issues | |||
8.4 | Check for and mitigate server- or setup-specific problems |
Tools
- mdcat cat for markdown
- httpie HTTP client
- taskell CLI kanboard
- oxipng PNG optimizer written in Rust
- mdbook command line tool to create books using Markdown
- diffoscope diff on steroids
- fzf fuzzy finder
- tmux
- inotify-tools
- tig
Releasing
Benchmarking
- oha http load benchmark tool
Load test with 50 requests/second for 2 minutes
oha https://example.org -q 50 -z 2m
- procpath memory profiling
Development
inotifywait
npm run watch
while true; do inotifywait -r dist | while read r; do scp dist/* c:/usr/share/cockpit/certificates/; done; done
Certificates / CA
step-cli certificate create root-ca root-ca.crt root-ca.key --profile root-ca
step certificate install root-ca.crt
# General client cert
step-cli certificate create $(hostname -f) server.crt server.key --san $(hostname -f) --san $(hostname -s) --profile leaf --ca ./root-ca.crt --ca-key ./root-ca.key --no-password --insecure --not-after "$(date --date "next year" -Iseconds)"
Docs
- tldr - cheatsheets for cli tools
General vim tricks
- calculations: in insert mode, press C-r = then insert your calculation
- resizing panes: Ctrl+w + and Ctrl+w -
Required packages
- fzf - fzf plugin
- the_silver_searcher - searching in files for the fzf plugin
- cargo / rust - rust LSP integration
- pyright - Python LSP integration
Plugins
- fugitive git plugin for vim
- vim-wiki wiki plugin for vim
- vim-gitgutter shows git diff markers in the sign column
- vim-commentary comment out stuff
Vim-wiki bindings
Publishing my notes:
nnoremap <F1> :terminal make serve<CR>
nnoremap <F2> :!make rsync_upload<CR>
nnoremap <F3> :!make commit_push<CR>
binding | action |
---|---|
F1 | execute mdbook serve |
F2 | publish to notes.vdwaa.nl |
F3 | git commit and push |
Standard vimwiki bindings:
binding | action |
---|---|
<C-Space> | toggle listitem on/off |
gl* | make the item before the cursor a list |
<Tab> | (insert mode) go next/create cell |
+ | create/decorate links |
vimwiki diary
binding | action |
---|---|
<Leader>wi | go to diary index |
<Leader>w<Leader>w | create a new diary entry |
:VimwikiDiaryGenerateLinks | update diary index |
Fugitive bindings
binding | action |
---|---|
<space>ga | git add |
<space>gs | git status |
<space>gc | git commit |
<space>gt | git commit (full path) |
<space>gd | git diff (:Gdiff) |
<space>ge | git edit (:Gedit) |
<space>gr | git read (:Gread) |
<space>gw | git write (:Gwrite) |
<space>gl | git log |
<space>gp | git grep |
<space>gm | git move |
<space>gb | git branch |
<space>go | git checkout |
<space>gps | git push |
<space>gpl | git pull |
Ale bindings
binding | action |
---|---|
gd | Go to definition |
gr | Go to references |
gs | Symbol search |
K | Display function/type info |
gR | Rename variable/function |
Commentary bindings
binding | action |
---|---|
gcc | comment out a line (takes a count) |
gcap | comment out a paragraph |
PKGBUILD
binding | action |
---|---|
F1 | bump pkgrel |
F2 | run updpkgsums |
Rust
binding | action |
---|---|
<Leader>b | cargo test |
<Leader>c | cargo clippy |
<Leader>x | cargo run |
<Leader>d | set break point |
<Leader>r | run debugger |
F5 | start debugger |
C-b | compile rust |
FZF
binding | action |
---|---|
F | Search all files |
<space>gf | Git Files |
<space>ff | Search in files using the_silver_searcher |
<space>ss | List all snippets |
Wishlist
- Git integration
- Snippets
- Debugging
- Language features: completion, find function definitions
GDB shortcuts
command | description |
---|---|
continue | continue execution normally |
finish | continue executing until function returns |
step | execute next line of source code |
next | execute next line of source code, without descending into functions |
Providing args:
gdb --args python example.py
Or in the gdb shell
args --config foo.toml
Printing variables:
print filename
print config.interval
Investigate
- coverage plugin for Python
- dotfiles
- more dotfiles
- easier way to quit a terminal like !make serve
- vim-spell how do I add words to my known good words list and cycle through misspelled words
- vim-spell enabled for git commit messages, with a good well known word list?
- custom help file with my missed keybindings
- vim-ale code actions for rust-analyzer don't work; maybe the issue is that it should preview rust-analyzer's actions
- run tests from vim with vim-test
Neovim setup
Goals
LSP
For the LSP use neovim's built-in LSP client and neovim/nvim-lspconfig for configuration. Use :LspInfo to verify a language server is available and works for the file you are editing.
null-ls => :NullLsInfo
Linting
Completor
- Git integration => tpope/vim-fugitive
- Searching files => telescope
- Smart commentor
TODO
- https://github.com/numToStr/Comment.nvim
- https://github.com/nvim-treesitter/nvim-treesitter-context
- cmp (completor)
- lsif
Resources
- debugging rust
- practical vim
x86 tablet
Notes about using Arch / Gnome on an x86 tablet
To Do
- Disable the broken webcam driver (atom-isp2) in the Arch kernel
- No way to copypaste from osd/applications
- No window controls with fingers in gnome
- Loading gnome is a bit slow, ~ 10-15 seconds (I/O?)
- Try out the phosh compositor
- Speakers emit a loud beep after a while, when playing a video (in firefox/chromium on npostart.nl or kodi)
- Landscape mode does not work in gnome / panel => iio-sensor-proxy (add to gnome group?)
- Hardware video decoding (mpv) (6263a231b3edabe651c64ab55be2a429b717ac9a in dotfiles)
- Firefox does not support one finger scrolling, chromium does issue
- Get bluetooth working, BCM4343A0.hcd this firmware
Gnome
- intel-media-driver for hardware video acceleration
- sof-firmware for audio
- caribou? Or onboard for OSD keyboard
- iio-sensor-proxy for screen orientation
Firefox one finger scrolling
cp /usr/share/applications/firefox.desktop ~/.local/share/applications/
vim ~/.local/share/applications/firefox.desktop
find the Exec line in the [Desktop Entry] section and change it to
Exec=env MOZ_USE_XINPUT2=1 /usr/lib/firefox/firefox %u
Apps
- firefox does not support PWAs
- twitch => browser / kodi addon
- youtube => export subscriptions as RSS feed (google takeout) https://www.youtube.com/feeds/videos.xml?channel_id=
- npo.nl => browser
- ziggo.tv => browser
- video => kodi
Problem with docked mode not responding
evtest
Mar 12 20:35:49 surfacego phosh[1077]: Tablet mode disabled
How to join Twitch IRC w/ WeeChat
WeeChat terminal IRC client
- https://weechat.org
gen token
- access to "OAuth Password Generator"; semi-official service
- https://twitchapps.com/tmi/
- http://help.twitch.tv/customer/portal/articles/1302780-twitch-irc
- push "Connect to Twitch"
- copy oauth key
- include "oauth:"
oauth:***
https://twitchapps.com/tmi/#access_token=***&scope=chat_login
reset/revoke
you must be keep "Twitch Chat OAuth Token Generator" connection
- http://www.twitch.tv/settings/connections
if you push "Disconnect", the IRC connection becomes unavailable; you need to generate a new OAuth key to join IRC again
add server
replace TWITCH_NAME with your lowercase Twitch name
/server add twitch irc.twitch.tv/6667 -password=oauth:*** -nicks=TWITCH_NAME -username=TWITCH_NAME
https://www.reddit.com/r/Twitch/comments/2uqews/anybody_here_using_weechat/
connect and join
/connect twitch
/join #CHANNEL_NAME
save settings
write settings to files
/save
exit/close
exit channel
/part #CHANNEL_NAME
close WeeChat
/quit
buffer
the commands/keys below are very convenient when joining 2 or more channels
/buffer list
move buffer-ring
Ctrl + n , Ctrl + p
close buffer
push Tab to complete BUFFER_NAME
/buffer close BUFFER_NAME
window split
vertical and horizontal split
/window splitv
/window splith
move window
F7 , F8
undo split
/window merge
set membership (optional)
used for a normal IRC client experience; get the user list et al.
/set irc.server.twitch.command "/quote CAP REQ :twitch.tv/membership"
http://fogelholk.io/twitch-irc-joinsparts-with-weechat/ https://ter0.net/enable-userlist-in-weechat-for-twitch-tv-irc/
Linux research
namespaces
A namespace (NS) "wraps" some global system resource to provide isolation. Linux now supports multiple NS types, see namespaces(7):
namespace | desc | flag |
---|---|---|
Mount NS | isolate mount point list | CLONE_NEWNS |
UTS NS | isolate system identifiers (hostname / NIS domain name) | CLONE_NEWUTS |
IPC NS | isolate System V IPC & POSIX MQ objects | CLONE_NEWIPC |
PID NS | isolate PID number space | CLONE_NEWPID |
Network NS | isolate network resources (network device, stack, ports) | CLONE_NEWNET |
User NS | isolate user ID and group ID number spaces | CLONE_NEWUSER |
Cgroup NS | virtualize (isolate) certain cgroup pathnames | CLONE_NEWCGROUP |
Time NS | isolate boot and monotonic clocks | CLONE_NEWTIME |
For each NS:
- Multiple instances of a NS may exist on the system
- At system boot, there is only one instance of each NS type (the initial namespace)
- A process resides in one NS instance (of each NS)
- A process inside a NS instance sees only that instance of the resource
Example UTS namespace, isolate two identifiers returned by uname(2):
- nodename, (hostname) sethostname(2)
- domainname, NIS domain name setdomainname(2)
Each UTS NS instance has its own nodename and domainname
Each process has symlink files in /proc/PID/ns for every namespace, for example /proc/PID/ns/time; the content can be read with readlink and has the form ns-type:[magic-inode-#].
Namespaces API
Syscalls for NS:
- clone(2) - create new (child) process in a new NS(s)
- unshare(2) - create new NS(s) and move caller into it/them
- setns(2) - move the calling process to another (existing) NS instance
There are shell commands as well (from util-linux):
- unshare(1) - create new NS and execute command in the NS(s)
- nsenter(1) - enter existing NS and execute a command
Creating a new user namespace requires no privileges, but all other namespaces require CAP_SYS_ADMIN. Example:
$ sudo unshare -u bash
# hostname foobar
# hostname
foobar
User namespaces
Allow per-namespace mappings of UIDs and GIDs: a process's UIDs and GIDs inside the NS may differ from those outside the NS. A process might have UID 0 inside the NS and a nonzero UID outside. User NSs have a hierarchical relationship: the parent of a user NS is the user NS of the process that created it. This parental relationship determines some of the rules about how capabilities work. When a new user NS is created, the first process in it has all capabilities, but that process has the power of the superuser only inside the user NS.
After creating a user NS, a UID & GID mapping is defined by writing to two files, /proc/PID/{uid_map,gid_map}. Records written to the map have the form ID-inside-ns ID-outside-ns length: ID-inside-ns and length define the range of IDs inside the user NS that are to be mapped, and ID-outside-ns defines the start of the corresponding mapped range in the "outside" user NS.
Example:
$ id
uid=1000(jelle)
$ unshare -U -r bash
usrns$ cat /proc/$$/uid_map
0 1000 1
usrns$ cat /proc/$$/gid_map
0 1000 1
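The mapping semantics can be sketched in a few lines of Python (map_id and the record list are illustrative, not any kernel API): with the single record 0 1000 1 written by unshare -U -r, UID 0 inside maps to UID 1000 outside and every other ID is unmapped.

```python
def map_id(inside_id, records):
    """Translate an in-namespace ID to the outside ID using map records
    of the form (ID-inside-ns, ID-outside-ns, length)."""
    for inside_start, outside_start, length in records:
        if inside_start <= inside_id < inside_start + length:
            return outside_start + (inside_id - inside_start)
    return None  # unmapped

records = [(0, 1000, 1)]    # the "0 1000 1" record from the example above
print(map_id(0, records))   # 1000: root inside is the unprivileged user outside
print(map_id(1, records))   # None: no other IDs are mapped
```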
Source:
- https://man7.org/conf/meetup/understanding-user-namespaces--Google-Munich-Kerrisk-2019-10-25.pdf
- https://lwn.net/Articles/531114/
containers
https://www.redhat.com/sysadmin/podman-inside-container https://developers.redhat.com/blog/2019/01/15/podman-managing-containers-pods
capabilities
cgroups
Sources:
- https://lwn.net/Articles/604609/
- https://lwn.net/Articles/679786/
eBPF
BPF (Berkeley Packet Filter), developed in 1992, improved the performance of packet capture tools. In 2013 a major rewrite of BPF was proposed, which was included in the Linux kernel in 2014 and turned BPF into a general-purpose execution engine usable for a variety of tasks. BPF allows the kernel to run mini programs on system and application events, such as disk I/O. BPF can be considered a virtual machine due to its virtual instruction set, executed by the Linux kernel BPF runtime, which includes an interpreter and a JIT compiler for turning BPF instructions into native instructions. BPF instructions must pass a verifier that checks for safety, ensuring they cannot crash the kernel. BPF has three main uses in Linux: networking, observability & security.
Tracing is event-based recording, as done by tools such as strace and tcpdump.
Sampling takes a subset of measurements to paint a coarse picture of the target; this is also known as profiling, or creating a profile. For example, sampling every 10 milliseconds has less overhead, but can miss events.
Observability is understanding a system through observation. Tools for this include tracing, sampling, and tools based on fixed counters; it does not include benchmark tools, which modify the state of the system. BPF tools are observability tools.
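The tracing-versus-sampling trade-off can be illustrated with simulated events (the timestamps and the 10 ms interval are made up for the sketch):

```python
# Each event is (start, end) in milliseconds.
events = [(0, 3), (12, 13), (25, 55), (70, 72)]

traced = len(events)  # tracing records every event

# Sampling looks at the system every 10 ms and only sees what is
# running at each tick; the 1 ms event at t=12 falls between ticks.
ticks = range(0, 100, 10)
seen = {i for t in ticks for i, (s, e) in enumerate(events) if s <= t < e}
print(traced, len(seen))  # 4 3: sampling missed one event
```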
BCC (BPF Compiler Collection) is the first higher-level tracing framework developed for BPF.
bpftrace is a newer front end that provides a special-purpose, high-level language for developing BPF tools. bpftrace is suited to one-liners, BCC to complex scripts.
Workload characterization defines what workload is being applied.
Dynamic instrumentation (kprobes & uprobes)
A BPF tracing source that can insert instrumentation points into live software, with zero overhead when not in use, as the software runs unmodified. It is often used to instrument the start and end of kernel and application functions. The downside of dynamic tracing is that functions can be renamed (an interface stability issue).
Example:
probe | description |
---|---|
kprobe:vfs_read | instrument beginning of kernel vfs_read() |
kretprobe:vfs_read | instrument returns of kernel vfs_read() |
uprobe:/bin/bash:readline | instrument beginning of readline in /bin/bash |
uretprobe:/bin/bash:readline | instrument returns of readline in /bin/bash |
Static instrumentation (tracepoints and USDT)
Static instrumentation is added by developers; user-level statically defined tracing (USDT) is its equivalent for userspace programs.
Example:
tracepoint | description |
---|---|
tracepoint:syscalls:sys_enter_open | instrument open(2) syscall |
usdt:/usr/bin/mysqld:mysqld:query_stat | query_stat probe |
Listing all tracepoints matching sys_enter_open:
bpftrace -l 'tracepoint:syscalls:sys_enter_open*'
Or snoop on exec with execsnoop:
sudo /usr/share/bcc/tools/execsnoop
BPF Technology background
BPF was originally developed to offload packet filtering to kernel space for tcpdump, which provided performance and safety benefits. Classic BPF was very limited: 2 registers versus eBPF's 10, and 32-bit register width versus 64-bit. eBPF also adds more storage options (512 bytes of stack space plus effectively unlimited "map" storage) and supports more event targets. BPF is useful for performance tools as it is built into Linux, efficient, and safe. BPF is more flexible than kernel modules: BPF programs are checked by a verifier before running, and richer data structures are available via maps. It is also easier to learn as it doesn't require kernel build artifacts. BPF programs can be compiled once and run everywhere.
BPF programs can be written with LLVM, BCC and bpftrace. BPF instructions can be viewed, and BPF objects (programs and maps) manipulated, with bpftool.
BPF API
A BPF program cannot call arbitrary kernel functions or read arbitrary memory; "helper" functions such as bpf_probe_read are provided to accomplish this. Memory access for BPF is restricted to its registers and the stack; bpf_probe_read can read arbitrary memory, but it performs safety checks up front, and it can also read userspace memory.
BPF Program Types
The program type specifies the type of events that the BPF program attaches to, in the case of observability tools. The verifier uses the program type to restrict which kernel functions can be called and which data structures can be accessed.
BPF lacked concurrency controls until Linux 5.1, and tracing programs can't use them yet, so per-CPU hashes/maps are used to keep track of event data without running into map overwrites or corruption.
The BPF Type Format (BTF) is a metadata format that encodes debug information describing BPF programs, maps, etc. BTF is becoming a general purpose format for describing kernel data formats. Tracing tools require kernel headers installed to read / understand C structs otherwise they have to be defined in a BPF program.
BPF CO-RE (Compile Once, Run Everywhere)
Allow BPF programs to be compiled to BPF bytecode once and then packaged for other systems.
BPF sysfs interface
Since Linux 4.4, BPF programs and maps can be exposed over sysfs, which allows creating persistent BPF programs that keep running after the program that loaded them has exited. This is also called "pinning".
BPF limitations
- Cannot call arbitrary kernel functions
- No infinite loops allowed
- Stack size limited to MAX_BPF_STACK (512)
Stack trace walking
Stack traces are used to understand the code paths that led to an event. BPF can record stack traces; framepointer based or ORC based stack walks.
Frame pointer based
The head of the linked list of stack frames can always be found in a register (RBP on x86_64), and the return address is stored at a known offset (+8) from the saved RBP. The debugger simply walks the linked list starting from RBP. GCC nowadays defaults to omitting the frame pointer and uses RBP as a general purpose register.
Debuginfo
Usually available via debug packages which contain debug files in DWARF format. Debug files are big and BPF does not support them.
LBR (Last Branch Record)
An Intel processor feature to record branches, including function call branches, in a hardware buffer. This has no overhead, but the depth is limited to 4-32 branches depending on the processor, which may not be enough.
ORC (Oops Rewind Capability)
New debug format for stack frames; it uses ELF sections (.orc_unwind, .orc_unwind_ip) and has been implemented in the Linux kernel.
Flamegraphs
Visualize stack traces, a stack backtrace or call trace. For example:
func_c
func_b
func_a
Where a calls b, which calls c. All the different call trees are recorded, along with how often each code path is taken, for example:
func_e
func_d func_c
func_b func_b func_b
func_a func_a func_a
1 2 7
+---------+
# func_e #
+---------+
+------------------+ +---------+
# func_c # # func_d #
+------------------+ +---------+
+--------------------------------+
# func_b #
+--------------------------------+
+--------------------------------+
# func_a #
+--------------------------------+
func_c uses 70% of the CPU time, func_e 10%.
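The "folding" step behind a flame graph can be sketched in Python; the sample counts here are read off the diagram above (7x a→b→c, 2x a→b→d, 1x a→b→d→e, an assumption about the intended numbers):

```python
from collections import Counter

# Recorded samples: one call stack (root first) per sample.
samples = ([("func_a", "func_b", "func_c")] * 7 +
           [("func_a", "func_b", "func_d")] * 2 +
           [("func_a", "func_b", "func_d", "func_e")] * 1)

# Folding turns each stack into one semicolon-joined line with a count;
# tools like flamegraph.pl consume exactly this format to draw the boxes.
folded = Counter(";".join(stack) for stack in samples)
total = sum(folded.values())
print(folded["func_a;func_b;func_c"] / total)  # 0.7 -> func_c box is 70% wide
```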
Event sources
kprobes
Provide dynamic kernel instrumentation and can instrument any kernel function. When kretprobes are also used, function duration is recorded as well. kprobes work by saving the target instruction and replacing it with a breakpoint instruction (int3 on x86_64); when instruction flow hits this breakpoint, the breakpoint handler calls the kprobe handler, after which the original instruction is executed. When the kprobe is no longer needed, the breakpoint is replaced by the original instruction. If ftrace already instruments the function, ftrace simply calls the kprobe handler, and when no longer used the ftrace kprobe handler is removed. For kretprobes, a kprobe is added to the function entry; when the function is called, the return address is saved and replaced with a "trampoline" function, kretprobe_trampoline. When the function returns, the CPU passes control to the trampoline, which calls the kretprobe handler. When no longer needed, the kprobe is removed.
This modifies kernel instruction text live, which means some functions are not allowed to be instrumented because of possible recursion. It does not work on ARM64, where kernel text is read-only.
BPF can use kprobes via:
- BCC - attach_kprobe & attach_kretprobe
- bpftrace - kprobe & kretprobe
uprobes
User-level dynamic instrumentation; same as kprobes, but file based: when a function in an executable is traced, all processes using that file, now and in the future, are traced.
BPF can use uprobes via:
- BCC - attach_uprobe & attach_uretprobe
- bpftrace - uprobe & uretprobe
tracepoints
Static kernel instrumentation, added by kernel developers in the form subsystem:eventname. Tracepoints work by adding a no-op instruction (5 bytes on x86_64) at compile time, which can later be replaced with a jmp. A tracepoint handler trampoline is added to the end of the function, which iterates over an array of registered tracepoint callbacks.
On enabling a tracepoint, the nop is replaced with a jmp to the tracepoint trampoline, an entry is added to the tracepoint callback array, and RCU (read-copy-update) is synced. Disabling removes the array entry and, if it was the last one, replaces the jmp with the nop again.
- BCC: TRACEPOINT_PROBE
- bpftrace: tracepoint probe type
BPF raw tracepoints (BPF_RAW_TRACEPOINT) create a tracepoint without stable, prepared arguments, so consumers have to handle the raw arguments. This is a lot faster and gives consumers access to all arguments. The downside is that the arguments might change.
USDT (User-level statically defined tracing)
Can be added to software via systemtap-sdt-dev or Facebook's folly, which define macros for instrumentation points.
PMC
Performance monitoring counters, programmable hardware counters on the processor. PMC modes:
- counting - keep track of the rate of events (the kernel reads the counters).
- overflow sampling - the PMC sends an interrupt to the kernel for the events it is monitoring.
Performance analysis
- latency - how long to accomplish a request or operation (in ms)
- rate - an operation or request rate per second
- throughput - typically data movement in bits or bytes / sec
- utilization - how busy a resource is over time as percentage
- cost - the price / performance ratio
Workload characterization, understand the applied workload:
- Who is causing the load? (PID, process, name, UID, IP Address)
- Why is the load called? (code path, stack trace, flame graph)
- What is the load? (IOPS, throughput, type)
- How is the load changing over time? (per-interval summary)
Drill down analysis
Examine a metric, find ways to decompose it into components, and so forth.
- Start examining at the highest level
- Examine next level details
- Pick the most interesting breakdown or clue
- If problem is unsolved, go back to step 2
USE metrics: Utilization, Saturation, Errors.
60 second analysis:
- uptime - quick overview of load averages; the three numbers are exponentially damped moving averages with 1-, 5- and 15-minute constants
- dmesg | tail - shows OOM kills, dropped TCP requests and similar issues
- vmstat - virtual memory stats.
- r - processes running on CPU or waiting for a turn (does not include disk I/O): r > CPU count => saturation.
- free - free memory in KBytes
- si/so - swap in & out; non-zero means the system is out of memory.
- us, sy, id, wa and st - CPU time on average across all CPUs: user, system (kernel), idle, wait I/O and stolen time.
- mpstat -P ALL 1 - per-CPU time broken down into states. CPU0 => 100% user time => single-threaded bottleneck.
- pidstat 1 - CPU usage per process, rolling output.
- iostat -xz 1 - Storage device I/O metrics.
- r/s, w/s - delivered reads and writes to the device
- await - time spent waiting on I/O completion in ms
- aqu-sz - average number of requests queued for the device; > 1 can indicate saturation.
- %util - device utilization (busy %); > 60% usually means poor performance
- free -m - available memory should not be near zero
- sar -n DEV 1 - network device metrics
- sar -n TCP,ETCP 1 - TCP metrics & errors:
- active/s - number of locally initiated TCP connections / sec
- passive/s - number of remotely initiated TCP connections / sec
- rtrans/s - number of retransmits / sec
BCC Tool checklist
- execsnoop - shows new process execution by printing one line of output for every execve(2)
- look for short lived processes often not seen by normal tools
- opensnoop - prints one line of output for each open(2)
- the ERR column shows files that failed to open
- ext4slower - traces common operations of the ext4 FS (reads, writes, opens, syncs) and prints those that exceed a threshold (10 ms)
- biolatency - traces disk I/O latency (time from device => completion) shown as histogram
- biosnoop - prints a line of output for each disk I/O with details including latency
- cachestat - prints a one line summary every second showing stats from the FS cache
- tcpconnect - prints one line of output for every active TCP connection (connect)
- tcpaccept - prints one line of output for every passive TCP connection (accept)
- tcpretrans - prints one line of output for every TCP retransmit package
- runqlat - times how long threads were waiting for their turn on CPU. Longer than expected waits for CPU access can be identified.
- profile - CPU profiler, a tool to understand which code paths are consuming CPU resources. It takes samples of stack traces at timed intervals and prints a summary of unique stack traces plus their counts.
Debugger
https://github.com/dylandreimerink/edb
Sources:
- https://lwn.net/Articles/740157/
- https://docs.cilium.io/en/v1.8/bpf/
- https://www.kernel.org/doc/html/latest/bpf/index.html
- BPF Performance Tools
io_uring
https://lwn.net/Articles/776703/ https://lwn.net/Articles/847951/ https://lwn.net/Articles/803070/ https://lwn.net/Articles/815491/ https://lwn.net/Articles/858023/ https://lwn.net/Articles/810414/
Rebuilds in CI
Goal: Test rebuilds for example for a new Python
Potential workflow:
pkgctl repo clone python
git checkout -b python-3.12
pkgctl build --repo $temp?
Every package in a todolist will be rebuilt against this repo, optionally checking out a python-3.12 branch if it exists. Needs either pkgctl support or special casing in CI.
AURWeb
Staging
- test data
- deploying staging env
MySQL
Setup
Document LXD setup
Javascript
- Replace typeahead
Python port
Templates
Translations
{% trans %}String{% endtrans %}
{{ "%sFoo Bar%s"
| tr
| format("arg1", "arg2")
| safe
}}
Testing
Performance
- RPC API
Benchmarking
- oha
- log mysql queries
Prometheus monitoring
Metrics:
- avg response time
- 95% percentile
- package prometheus-fastapi-instrumentator
- add to AURWeb
- Grafana dashboard
Postgresql
Upgrades
Changes for upgrade to the Python port.
Traceback (most recent call last):
File "/usr/lib/python3.9/configparser.py", line 789, in get
value = d[option]
File "/usr/lib/python3.9/collections/__init__.py", line 941, in __getitem__
return self.__missing__(key) # support subclasses that define __missing__
File "/usr/lib/python3.9/collections/__init__.py", line 933, in __missing__
raise KeyError(key)
KeyError: 'aurwebdir'
[jelle@aurweb.lxd][/srv/http/aurweb]%AUR_CONFIG=/etc/aurweb/config python -m aurweb.spawn
-------------------------------------------------------------------------------------------------------------------------------
Spawing PHP and FastAPI, then nginx as a reverse proxy.
Check out https://aur.archlinux.org
Hit ^C to terminate everything.
-------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/srv/http/aurweb/aurweb/spawn.py", line 171, in <module>
start()
File "/srv/http/aurweb/aurweb/spawn.py", line 109, in start
php_address = aurweb.config.get("php", "bind_address")
File "/srv/http/aurweb/aurweb/config.py", line 40, in get
return _get_parser().get(section, option)
File "/usr/lib/python3.9/configparser.py", line 781, in get
d = self._unify_values(section, vars)
File "/usr/lib/python3.9/configparser.py", line 1149, in _unify_values
raise NoSectionError(section) from None
configparser.NoSectionError: No section: 'php'
pytest-pacman
- test fixtures in relation with archweb
- try out vim coverage plugin
Testing
export PYTHONPATH=/home/jelle/projects/pytest-pacman:build/lib.linux-x86_64-3.9:.
PYTEST_PLUGINS=pytest_pacman.plugin pytest --fixtures
table view
Drop jQuery tablesorter
https://www.kryogenix.org/code/browser/sorttable/sorttable.js
archweb
- archweb repository security status for packages in dev dashboards
- mirror signup form? Gitlab
- dark theme / css
- json output for dashboards for a Rust arch-package-status command!!
Dark mode
https://sparanoid.com/note/css-variables-guide/ https://lea.verou.me/2021/03/inverted-lightness-variables/ https://codesalad.dev/blog/color-manipulation-with-css-variables-and-hsl-16
Big improvements
- Mirror monitoring reminder emails
- Keycloak SSO
- Upstream SASS files
- Rest API
Small things
- todolist - add note support from staff (UX?)
- todolist - add /todo/json endpoint and filter on status
- detect untrusted / signed packages in archweb for example with zorun (old repo db)
- performance stale relations
- django performance
- rebuilderd-status tests -> mock requests
Use arch-common-style with SASS
- django-sass
- django-compressor?
Hyperkitty uses SASS https://gitlab.com/mailman/hyperkitty/-/blob/master/hyperkitty.spec
https://ronald.ink/using-sass-django/ https://terencelucasyap.com/using-sass-django/ https://github.com/jrief/django-sass-processor https://github.com/django-compressor/django-compressor/ https://github.com/torchbox/django-libsass https://www.accordbox.com/blog/how-use-scss-sass-your-django-project-python-way/
Mirror out of date
https://github.com/archlinux/archweb/issues/142
Create a new page with a list of out of date mirrors with a button for mirror maintainers to send an email. With a different template per issue:
Keycloak
TODO
- Test groups
- Test updating/changing groups and relogging in
- Syncing groups/users periodically
- Used the sso_accountid anywhere? Read OIDC docs about it / what happens when email changes in keycloak
- Test JavaScript XHR actions with OIDC
- do we implement filter_users_by_claims https://mozilla-django-oidc.readthedocs.io/en/stable/installation.html#connecting-oidc-user-identities-to-django-users
- Hide password change logic from developer profile
- Test Deny access for non Staff
- Fix logout, not logging out of keycloak if that is desirable
- Test new TU user login
- The "Release Engineering" group is obsolete in archweb
- Import sub ids for existing staff into archweb
- Add Release Maintainers to Keycloak and add the logic for it
- Onboard active testers to Keycloak, remove old testers
- Move ex-developers/trusted users/staff to the retired group
Sync users from Keycloak
Most likely we want to create a new openid client which has "realm-management roles" such as "query-groups, query-users, view-users" and can periodically auth and sync keycloak-sync https://www.keycloak.org/docs/latest/server_admin/#_service_accounts https://github.com/marcospereirampj/python-keycloak
Blocking bugs
- It's broken with latest requests: https://github.com/marcospereirampj/python-keycloak/issues/196
- Document service admin example: https://github.com/marcospereirampj/python-keycloak/issues/141
- Keycloak Rest API https://www.keycloak.org/docs-api/6.0/rest-api/index.html#_groups_resource
Self signed certificate issues with virtualenv
certifi does not use the system CA bundle:
# Your TLS certificates directory (Debian like)
export SSL_CERT_DIR=/etc/ssl/certs
# CA bundle PATH (Debian like again)
export CA_BUNDLE_PATH="${SSL_CERT_DIR}/ca-certificates.crt"
# If you have a virtualenv:
. ./.venv/bin/activate
# Get the current certifi CA bundle
CERTFI_PATH=`python -c 'import certifi; print(certifi.where())'`
test -L $CERTFI_PATH || rm $CERTFI_PATH
test -L $CERTFI_PATH || ln -s $CA_BUNDLE_PATH $CERTFI_PATH
Invalid redirect URI generated by archweb: http instead of https.
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://keycloak.lxd/auth/realms/archlinux/protocol/openid-connect/token
Configuration issue when using ./manage.py; resolved by setting SECURE_PROXY_SSL_HEADER.
Devel queries
- /devel flags "testing" if the package is in testing; in_testing() is run for every package in "My Flagged Packages"
- /packages/stale_relations - PackageRelation.last_update is called for every package, doing one query each: 140 queries in 2500 ms
- inactive_users takes 1200 ms -> removing
- Fix relation.get_associated_packages for all inactive user relations; they trigger a query like: return Package.objects.normal().filter(pkgbase=self.pkgbase)
webseeds
We should be able to support webseeds again in magnets
wrong permissions
wrong_permissions is called 34 times.
- Fix relation.get_associated_packages for all stale_relations; they trigger a query like: return Package.objects.normal().filter(pkgbase=self.pkgbase)
<td class="wrap">{{ relation.user.userprofile.allowed_repos.all|join:", " }}</td>
<td class="wrap">{{ relation.repositories|join:", " }}</td>
Pagination triggers queries for everything:
- Inactive User Relations
- Non-existent pkgbases
- Maintainers with Wrong Permissions
98 similar queries:
SELECT ••• FROM "packages" INNER JOIN "repos" ON ("packages"."repo_id" = "repos"."id") INNER JOIN "arches" ON ("packages"."arch_id" = "arches"."id") WHERE "packages"."pkgbase" = 'libg15render' ORDER BY "packages"."pkgname" ASC
arch common styles
Make the navbar menu resizable
Rest API
- Token auth for permission related requests
- Pagination
- Signoffs
- Search with multiple inputs (packages)
- Todo
- Packages
- Reports
django-rest-framework graphene-django django-graph-api django-restsql
Python packaging
Cleanups
- PEP517 MR's https://gitlab.archlinux.org/groups/archlinux/packaging/packages/-/merge_requests?scope=all&state=opened&search=PEP+517
- drop python-pytest-runner
- drop python-exceptiongroup => https://archlinux.org/todo/drop-python-exceptiongroup/
- drop more felix unmaintained packages
- drop python-nose
- drop python-pytest7
- drop python-six
- drop python-future
- drop using python-pytest-cov in our tests
- drop python-importlib-metadata
- drop python-importlib_resources
- fix packages with checkdepends without running check()
- build with PEP517
- update felix maintained packages like urllib3
- update packaging guidelines and remove nose
- backport package python-typing_extensions
- backport package python-unicodedata2 => https://fonttools.readthedocs.io/en/latest/optional.html#fonttools-unicode
- python3.13 logger.warn:
rg -uuu -A 1 -B 1 "logging.warn\(" -g '*.py'
- python3.13 deprecations
- logger.warn
- mailcap
- nntplib
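The logger.warn fix found by the rg one-liner above is mechanical: warn() is a long-deprecated alias that 3.13 drops, and warning() is the drop-in replacement. A minimal sketch (the "demo" logger name and message are made up):

```python
import io
import logging

stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.WARNING, force=True)
log = logging.getLogger("demo")

# Before: log.warn("disk almost full")  -- deprecated alias, removed in 3.13
log.warning("disk almost full")         # the drop-in replacement

print(stream.getvalue())  # WARNING:demo:disk almost full
```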
remove python-py
- python-nox - dropped https://github.com/wntrblm/nox/commit/cdd0f3bdbd83f4d2e426b096750e281998ac4900
- python-pytest-aiohttp - does not depend on it?
- python-pytest-xprocess https://github.com/pytest-dev/pytest-xprocess/commit/1847ca771201229b3607dcbdf9f29c6becc50d83
- python-openpyxl
- python-execnet
- python-pytest-forked https://github.com/pytest-dev/pytest-forked/blob/master/setup.py#L22 - create bug report https://github.com/pytest-dev/pytest-forked/issues/88
- python-kubernetes
nose
Needs bug reports
- python-pyqrcode
- vigra
- python-mohawk
- python-sure
- python-django-haystack
- shadowsocks
cython0
- python-jq
- urh
- brltty
- python-grpcio-tools
- python-pyliblo
- vidcutter
- python-kivy
- python-asyncpg
- php-grpc
- python-basemap-common
- python-h5py-openmpi
- python-basemap
- python-bintrees
- python-h5py
- grpc
- cbindgen
- opendht
- python-pandas
- python-uvloop
- brltty-udev-generic
- python-yaml
- rdma-core
- python-pyscipopt
- python-pystemmer
- python-statsmodels
- python-grpcio
- grpc-cli
- python-pyarrow
- php-legacy-grpc
- python-hidapi
- python-pycapnp
Drop nosetest
Convert packages to use either pytest or python -m unittest
Drop pytest-coverage as checkdepends
Either use -o addopts=''
when possible or sed it out.
3.13 compat check
Read https://docs.python.org/3.13/whatsnew/3.13.html
Major release
Arch's Python modules store the version number in the module path meaning they won't be picked up by a new Python release for example 3.11 => 3.12.
- bump Python and rebuild it in a separate branch
- bootstrap
- find incompatible packages upfront?
Package: python-bootstrap-evil
Rebuild order
The python package repo has a script called genrebuild
this should include all packages required for the rebuild:
Figuring out the order (TODO: exclude bootstrapped build packages):
./genrebuild > rebuild-list.txt
cat rebuild-list.txt | xargs expac -Sv %n | sort | uniq > final.txt
For some reason our files.db includes old packages which are no longer in the repos; arch-rebuild-order hard-fails on missing packages, so we clean those out with an ugly expac hack.
We can use arch-rebuild-order; it does not handle cyclic dependencies but should be good enough (tm):
arch-rebuild-order --no-reverse-depends $(cat ./final.txt)
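The ordering arch-rebuild-order computes is essentially a topological sort of the dependency graph; a sketch with a hypothetical (hand-written) dependency map, using Python's stdlib graphlib, which, like arch-rebuild-order, fails on cycles (CycleError):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: package -> build dependencies.
# The real input would come from genrebuild plus the repo databases.
deps = {
    "python-build": {"python-packaging", "python-pyproject-hooks"},
    "python-packaging": {"python-flit-core"},
    "python-flit-core": set(),
    "python-pyproject-hooks": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies always precede the packages that need them
```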
Python bootstrapping
Custom repository:
https://pkgbuild.com/~jelle/python3.11
cp /usr/share/devtools/pacman-staging.conf /usr/share/devtools/pacman-python.conf
Edit the config file and add the following above [staging]:
[python]
SigLevel = Optional
Server = https://pkgbuild.com/~jelle/python3.11
sudo ln -s /usr/bin/archbuild /usr/bin/python-x86_64-build
repo-add python.db.tar.gz *.pkg.tar.zst
sudo python-x86_64-build -- -- --nocheck
Bootstrapping
- First build python-bootstrap (from svn-packages) with Python 3.X
- Yeet the packages into a pacman repository
- Build flit-core with the bootstrapped build and installer
- Build python-installer, comment out the sphinx build, and repo-add it
- Build python-packaging (requires build, installer, flit-core). HACK: PYTHONPATH=src python -m build -nw, required by python-build!
- Build python-build, comment out the sphinx build, and repo-add it
- Build python-pyproject-hooks and repo-add it
- Build python-jaraco.text (a requirement for the bootstrap build of setuptools)
- Build python-setuptools => bootstrap python-jaraco.text and tons more...
- Or build python-setuptools with export PYTHONPATH=/usr/lib/python3.10/site-packages/
- Wheel needs jaraco.functools and more...
3.13
logging.warn
- afew - PR made
- pyzo - PR made
- gnome-tweak-tool - PR made
- python-fiona - PR made
- python-fido2 - PR made
- glusterfs - PR made
- fusesoc - PR made
- python-edalize - PR made
- klipper - PR made
- nss-pam-ldapd - PR made
- libvirt-python - MR made
- python-setuptools-gettext - PR made
- python-pipx => fixed upstream
- shadowsocks => needs felix patching
- offlineimap
- pmbootstrap
- python-gflags
- sugar
- sugar-toolkit-gtk3
- subversion
- bullet
- tensorboard
- csound
- python-py => dead package but needed for pytest-forked
- dart
- pycharm-community-edition
- tensorflow
- python-tensorflow-estimator
Crypt module deprecated
- cloud-init
- python-pytest-testinfra
- python-pyftpdlib
- sqlmap
- salt
- python-twisted
- samba
mailcap module deprecated
- alot
- visidata
cgi module removal
- electrum
- gpsd
- python-boto
- python-distlib
- python-eventlet
- python-lxml
- python-mako
- python-treq => fixed in git
- python-webob
- python-zeep
- rhythmbox
- python-repoze.profile
- python-setuptools
- python-softlayer-zeep
- python-pytorch
- salt
- root
- python-twisted
- python-webtest
- python-whoosh
- python-pygame-sdl2
- python-wslib
- python-oauth2client
- python-flup
- mysql-workbench
- python-wheezy-template
- python-oscrypto
- python-pyro
- python-paste
- python-openid
- python-owslib
- python-mimeparse
- python-iminuit
- python-formencode
- python-genshi
- python-falcon
- python-asn1crypto
- matrix-synapse
- python-requests-ftp
- python-htmlin
- python-formencode
- hotdoc
- xonsh
- boost
- deluge
- buildbot
- python-cherrypy
- pymol
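Most of the cgi hits below are from cgi import escape; html.escape is the stdlib replacement, with one behavioral difference worth noting: it escapes quotes by default, whereas cgi.escape did not.

```python
from html import escape

# cgi.escape(s) only handled <, > and & (quote escaping was opt-in);
# html.escape(s) also escapes ' and " unless quote=False is passed.
print(escape("<a href='x'>"))               # &lt;a href=&#x27;x&#x27;&gt;
print(escape("<a href='x'>", quote=False))  # &lt;a href='x'&gt;
```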
electrum/src/Electrum-4.5.4/packages/pip/_vendor/distlib/compat.py
474: from cgi import escape
bluefish/src/bluefish-2.2.15/data/bflib/bflib_python_2.3.xml
22797:Begin by writing import cgi. Do not use from cgi import
ninja/src/ninja-1.12.0/src/browse.py
38: from cgi import escape
nuitka/src/Nuitka-2.1.5/nuitka/build/inline_copy/tqdm/tqdm/notebook.py
69: from cgi import escape
python-boltons/src/boltons/boltons/tableutils.py
58: from cgi import escape as html_escape
67: from cgi import escape as html_escape
pycharm-community-edition/src/intellij-community/python/helpers/virtualenv-20.24.5.pyz: binary file matches (found "\0" byte around offset 5)
pycharm-community-edition/src/intellij-community/python/helpers/virtualenv-20.13.0.pyz: binary file matches (found "\0" byte around offset 5)
pycharm-community-edition/src/intellij-community/python/helpers/typeshed/stubs/WebOb/webob/request.pyi
13:from cgi import FieldStorage
pycharm-community-edition/src/intellij-community/python/helpers/typeshed/stubs/WebOb/webob/multidict.pyi
3:from cgi import FieldStorage
python-cherrypy/src/cherrypy-18.9.0/cherrypy/lib/httputil.py
15:from cgi import parse_header
python-boto/src/boto-2.49.0.20190327/boto/ecs/item.py
28: from cgi import escape as html_escape
python-cheetah3/src/python-cheetah3/Cheetah/Tests/Regressions.py
2: from cgi import escape as html_escape
python-boto/boto-python-3.8.patch
14:+ from cgi import escape as html_escape
python-distlib/src/python-distlib/distlib/compat.py
480: from cgi import escape
python-future/src/future-1.0.0/docs/compatible_idioms.rst
1246: from cgi import escape
python-future/src/future-1.0.0/docs/notebooks/Writing Python 2-3 compatible code.ipynb
2767: "from cgi import escape\n",
python-future/src/future-1.0.0/docs/build/html/_sources/compatible_idioms.rst.txt
1246: from cgi import escape
python-genshi/src/genshi-0.7.7/examples/bench/basic.py
8:from cgi import escape
python-gevent/src/gevent-24.2.1/examples/webproxy.py
20: from cgi import escape
python-htmlmin/src/htmlmin-220b1d16442eb4b6fafed338ee3b61f698a01e63/htmlmin/escape.py
33: from cgi import escape
python-lxml/src/lxml-lxml-5.1.0/src/lxml/html/diff.py
14: from cgi import escape as html_escape
python-lxml/src/lxml-lxml-5.1.0/src/lxml/doctestcompare.py
45: from cgi import escape as html_escape
python-lxml/src/lxml-5.1.0/src/lxml/html/diff.py
14: from cgi import escape as html_escape
python-lxml/src/lxml-5.1.0/src/lxml/doctestcompare.py
45: from cgi import escape as html_escape
python-markdown2/src/python-markdown2-2.4.12/perf/recipes.pprint
97: {'comment': u'I am completely new to python,\n\nI want to test if python is working on my hosting account. I am told it is!\n\nI have copied the text at the bottom of my post, i have then pasted it to notepad, saved it as ptest.cgi and secondly ptest.txt\n\nuploaded it to my cgi-bin, inside a folder called python, chomod 755\nthen go to my page: http://yourfeet.co.uk/cgi-bin/python/ptest.cgi\n\nor http://yourfeet.co.uk/cgi-bin/python/ptest.txt\n\n\nboth give errors in log as : Premature end of script headers\n\nWhat am i missing out please\n.\n.\n.\n\n\n\n\n#!/usr/local/bin/python\nprint "Content-type: text/html"\nprint\nprint "<pre>"\nimport os, sys\nfrom cgi import escape\nprint "Python %s" % sys.version\nkeys = os.environ.keys()\nkeys.sort()\nfor k in keys:\n print "%s\\t%s" % (escape(k), escape(os.environ[k]))\nprint "</pre>" ',
python-nltk/src/nltk-3.8.1/nltk/tree/prettyprinter.py
25: from cgi import escape
python-numpy/src/numpy-1.26.4/tools/npy_tempita/__init__.py
44: from cgi import escape as html_escape
python-pipenv/src/pipenv-2023.12.1/pipenv/patched/pip/_vendor/distlib/compat.py
474: from cgi import escape
python-odfpy/src/odfpy-release-1.4.2/contrib/odf2epub/odf2epub
25:from cgi import escape
python-pip/src/pip-24.0/src/pip/_vendor/distlib/compat.py
480: from cgi import escape
python-odfpy/src/odfpy-release-1.4.2/contrib/html2odt/shtml2odt.py
25:from cgi import escape,parse_header
python-odfpy/src/odfpy-release-1.4.2/contrib/html2odt/html2odt.py
25:from cgi import escape,parse_header
python-pystache/src/pystache-0.6.5/pystache/defaults.py
15: from cgi import escape
python-repoze.profile/src/repoze.profile-2.3/repoze/profile/compat.py
41: from cgi import parse_qs
python-sentry_sdk/src/sentry-python-1.45.0/sentry_sdk/utils.py
27: from cgi import parse_qs # type: ignore
supervisor/src/supervisor-4.2.5/supervisor/compat.py
141: from cgi import escape
python-testrepository/src/testrepository-0.0.21/lib/python3.11/site-packages/pip/_vendor/distlib/compat.py
480: from cgi import escape
python-wheezy-template/src/wheezy.template-3.2.2/demos/bigtable/bigtable.py
432:#from cgi import escape
[jelle@natrium][/mnt/arch/python-packaging]%rg -uuu "unittest.findTestCases" -g '*.py' .
./cython0/src/cython-0.29.37.1/runtests.py
1535: suite.addTest(unittest.findTestCases(sys.modules[cls]))
./cython/src/cython/runtests.py
1809: suite.addTest(unittest.findTestCases(sys.modules[cls]))
./cython/src/cython-3.0.10/runtests.py
1809: suite.addTest(unittest.findTestCases(sys.modules[cls]))
./python/src/Python-3.12.3/Lib/unittest/loader.py
490: "unittest.findTestCases() is deprecated and will be removed in Python 3.13. "
./python/src/Python-3.12.3/Lib/test/test_unittest/test_loader.py
1526: suite = unittest.findTestCases(m,
./python/src/Python-3.12.2/Lib/unittest/loader.py
490: "unittest.findTestCases() is deprecated and will be removed in Python 3.13. "
./python/src/Python-3.12.2/Lib/test/test_unittest/test_loader.py
1526: suite = unittest.findTestCases(m,
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_web.py
182: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_templating.py
1790: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_supervisord.py
839: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_supervisorctl.py
2072: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_states.py
56: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_socket_manager.py
253: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_rpcinterfaces.py
2401: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_poller.py
443: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_options.py
3809: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_loggers.py
604: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_http.py
689: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_events.py
513: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_end_to_end.py
424: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_dispatchers.py
1232: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_confecho.py
18: return unittest.findTestCases(sys.modules[__name__])
./supervisor/src/supervisor-4.2.5/supervisor/tests/test_childutils.py
138: return unittest.findTestCases(sys.modules[__name__])
./python-pylint/src/pylint-3.1.0/pylint/checkers/stdlib.py
262: "unittest.findTestCases",
./python-pyscard/src/pyscard-2.0.8/smartcard/test/scard/testsuite_scard.py
47: testsuite_scard.addTest(unittest.findTestCases(module))
./python-pyscard/src/pyscard-2.0.8/smartcard/test/frameworkpcsc/testsuite_frameworkpcsc.py
40: testsuite_framework.addTest(unittest.findTestCases(module))
./python-pyscard/src/pyscard-2.0.8/smartcard/test/framework/testsuite_framework.py
55: testsuite_framework.addTest(unittest.findTestCases(module))
./python-pyserial/src/pyserial-3.5/test/run_all_tests.py
40: testsuite = unittest.findTestCases(module)
./python-pyscard/src/pyscard-2.0.8/smartcard/test/framework/testcase_CardConnection.py
246: suite1 = unittest.makeSuite(testcase_CardConnection)
./python-pyscard/src/pyscard-2.0.8/smartcard/test/framework/testcase_Card.py
119: suite1 = unittest.makeSuite(testcase_CardConnection)
./python-pyscard/src/pyscard-2.0.8/smartcard/test/framework/testcase_CAtr.py
119: suite1 = unittest.makeSuite(testcase_CAtr)
./python-pyscard/src/pyscard-2.0.8/smartcard/test/framework/testcase_ATR.py
67: suite1 = unittest.makeSuite(testcase_ATR)
./python-wstools/src/wstools/tests/test_wsdl.py
38: suite.addTest(unittest.makeSuite(WSDLToolsTestCase, 'test_'))
- yt-dlp
- mitmproxy
- ocrfeeder
- polymake
- python-future
- python-kivy
- python-mediafile
- python-pgpy
- python-tweepy
- ranger
[jelle@natrium][/mnt/arch/python-packaging]%rg -uuu "import imghdr" -g '*.py' .
./ocrfeeder/src/ocrfeeder/src/ocrfeeder/util/graphics.py
25:import imghdr
./polymake/src/polymake-4.11/resources/jupyter-polymake/jupyter_kernel_polymake/kernel.py
8:import imghdr
./python-future/src/future-1.0.0/src/future/backports/email/mime/image.py
12:import imghdr
./python-kivy/src/Kivy-2.2.1/kivy/core/image/__init__.py
65:import imghdr
./python-mediafile/src/mediafile/mediafile.py
52:import imghdr
./python-pgpy/src/python-pgpy/pgpy/constants.py
5:import imghdr
./python-tweepy/src/tweepy-4.14.0/tweepy/api.py
7:import imghdr
- bandit
- mercurial
- pychess
- python-boto
- libsvm
- python-curio
- python-pyocd
- routersploit
[jelle@natrium][/mnt/arch/python-packaging]%rg -uuu "import telnetlib" -g '*.py' .
./bandit/src/bandit-1.7.7/tests/functional/test_functional.py
240: """Test for `import telnetlib` and Telnet.* calls."""
./bandit/src/bandit-1.7.7/examples/telnetlib.py
1:import telnetlib
./jython/src/Lib/test/test_telnetlib.py
2:import telnetlib
./libsvm/src/libsvm-332/tools/grid.py
317: import telnetlib
./mercurial/src/mercurial-6.7.2/tests/test-demandimport.py
165:import telnetlib
194: import telnetlib
./peda/src/peda-1.2/lib/skeleton.py
180:import telnetlib
./pychess/src/pychess-1.0.5/lib/pychess/ic/TimeSeal.py
3:import telnetlib
./pychess/src/pychess-1.0.4/lib/pychess/ic/TimeSeal.py
3:import telnetlib
./python-boto/src/boto-9e1cd3bd76e738d80630f1bd9160fd87c8eab865/tests/integration/ec2/test_connection.py
30:import telnetlib
./python-boto/src/boto-2.49.0.20190327/tests/integration/ec2/test_connection.py
30:import telnetlib
./python-curio/src/curio/curio/monitor.py
58:import telnetlib
./routersploit/src/routersploit-3.4.2/routersploit/core/telnet/telnet_client.py
1:import telnetlib
./routersploit/src/routersploit-3.4.2/routersploit/core/exploit/shell.py
2:import telnetlib
./python-pudb/src/pudb-2022.1.3/pudb/remote.py
142: import telnetlib as tn
./python-pylint/src/pylint-3.1.0/tests/functional/a/access/access_attr_before_def_false_positive.py
9:import telnetlib
./python-pyocd/src/pyOCD-0.36.0/test/unit/test_semihosting.py
23:# import telnetlib
pipes
./canto-curses/src/canto-curses-0.9.9/canto_curses/command.py
17:import pipes
./hamster-time-tracker/src/hamster-3.0.3/waflib/Utils.py
587: import pipes
./hamster-time-tracker/src/hamster-3.0.3/waflib/extras/genpybind.py
2:import pipes
./dnf/src/dnf-4.19.0/dnf/pycomp.py
84: import pipes
./displaycal/src/DisplayCAL-3.9.12/DisplayCAL/worker_base.py
8:import pipes
./displaycal/src/DisplayCAL-3.9.12/DisplayCAL/worker.py
15:import pipes
./buildbot/src/buildbot/worker/buildbot_worker/runprocess.py
91: import pipes # pylint: disable=import-outside-toplevel
./clang/src/clang-17.0.6.src/utils/creduce-clang-crash.py
18:import pipes
./jupyter-nbclassic/src/nbclassic-1.0.0/setupbase.py
15:import pipes
./ldb/src/ldb-2.9.0/third_party/waf/waflib/Utils.py
587: import pipes
./ldb/src/ldb-2.9.0/third_party/waf/waflib/extras/genpybind.py
2:import pipes
./kupfer/src/kupfer-326/waflib/Utils.py
587: import pipes
./kupfer/src/kupfer-325/waflib/Utils.py
587: import pipes
./kicad/src/kicad/thirdparty/sentry-native/external/crashpad/build/run_tests.py
19:import pipes
./waf/src/waf-2.0.27/waflib/Utils.py
587: import pipes
./waf/src/waf-2.0.27/waflib/extras/genpybind.py
2:import pipes
./jython/src/Lib/test/test_pipes.py
1:import pipes
./virt-manager/src/virt-manager-4.1.0/virtManager/object/domain.py
1309: import pipes
./tevent/src/tevent-0.16.1/third_party/waf/waflib/Utils.py
587: import pipes
./tevent/src/tevent-0.16.1/third_party/waf/waflib/extras/genpybind.py
2:import pipes
./mysql-workbench/src/mysql-workbench-community-8.0.36-src/plugins/wb.admin/backend/wb_server_management.py
30:import pipes
./mercurial/src/mercurial-6.7.2/tests/test-verify-repo-operations.py
39:import pipes
./tensorboard/src/embedded_tools/tools/objc/j2objc_dead_code_pruner.py
32:import pipes # swap to shlex once on Python 3
./tensorboard/src/tensorboard-2.15.1/tensorboard/manager_e2e_test.py
23:import pipes
./meld/src/meld/meld/melddoc.py
20:import pipes
./tensorboard/src/tensorboard-2.15.1/tensorboard/tools/diagnose_tensorboard.py
32:import pipes
./tdb/src/tdb-1.4.10/third_party/waf/waflib/Utils.py
587: import pipes
./tdb/src/tdb-1.4.10/third_party/waf/waflib/extras/genpybind.py
2:import pipes
./talloc/src/talloc-2.4.2/third_party/waf/waflib/Utils.py
587: import pipes
./talloc/src/talloc-2.4.2/third_party/waf/waflib/extras/genpybind.py
2:import pipes
./samba/src/samba-4.20.0/third_party/waf/waflib/Utils.py
587: import pipes
./samba/src/samba-4.20.0/third_party/waf/waflib/extras/genpybind.py
2:import pipes
./python-dbus-deviation/src/dbus-deviation-0.6.1/dbusdeviation/utilities/vcs_helper.py
37:import pipes
./python-fire/src/fire-0.6.0/fire/trace.py
32:import pipes
./python-fire/src/fire-0.6.0/fire/core.py
59:import pipes
./python-humanfriendly/src/python-humanfriendly-10.0/humanfriendly/testing.py
28:import pipes
./python-humanfriendly/src/python-humanfriendly-10.0/humanfriendly/cli.py
82:import pipes
./python-iminuit/src/python-iminuit-root/interpreter/llvm-project/clang/utils/creduce-clang-crash.py
17:import pipes
./sagemath/src/sage-10.3/build/sage_bootstrap/flock.py
15:import pipes
./python-iminuit/src/python-iminuit/extern/root/interpreter/llvm-project/clang/utils/creduce-clang-crash.py
17:import pipes
./root/src/root-6.30.04/interpreter/llvm-project/clang/utils/creduce-clang-crash.py
17:import pipes
./python-nodeenv/src/nodeenv-1.8.0/nodeenv.py
26:import pipes
./python-nodeenv/src/nodeenv-1.8.0/tests/nodeenv_test.py
5:import pipes
./reprotest/src/reprotest/reprotest/lib/adt_testbed.py
27:import pipes
./reprotest/src/reprotest/reprotest/lib/VirtSubproc.py
34:import pipes
./python-tensorboard_plugin_wit/src/embedded_tools/tools/objc/j2objc_dead_code_pruner.py
32:import pipes # swap to shlex once on Python 3
cgitb
[jelle@natrium][/mnt/arch/python-packaging]%rg -uuu "import cgitb" -g '*.py' .
./krita/src/krita-5.2.2/plugins/extensions/pykrita/plugin/krita/excepthook.py
14:import cgitb
./kajongg/src/kajongg-24.02.2/src/mainwindow.py
16:import cgitb
./playitslowly/src/playitslowly-1.5.1/playitslowly/myGtk.py
380: import cgitb
./python/src/Python-3.12.3/Lib/cgitb.py
5: import cgitb; cgitb.enable()
./python/src/Python-3.12.3/Lib/test/test_cgitb.py
44: ('import cgitb; cgitb.enable(logdir=%s); '
60: ('import cgitb; cgitb.enable(format="text", logdir=%s); '
./python/src/Python-3.12.2/Lib/cgitb.py
5: import cgitb; cgitb.enable()
./python/src/Python-3.12.2/Lib/test/test_cgitb.py
44: ('import cgitb; cgitb.enable(logdir=%s); '
60: ('import cgitb; cgitb.enable(format="text", logdir=%s); '
./virtualbox/src/VirtualBox-7.0.16/src/VBox/ValidationKit/testmanager/webui/wuibase.py
951: import cgitb;
./virtualbox/src/VirtualBox-7.0.16/src/VBox/ValidationKit/testmanager/webui/wuiadmin.py
43:import cgitb;
./virtualbox/src/VirtualBox-7.0.16/src/VBox/ValidationKit/testmanager/core/webservergluecgi.py
43:import cgitb;
./virtualbox/src/VirtualBox-7.0.16/src/VBox/ValidationKit/testmanager/core/webservergluebase.py
43:import cgitb
./python-flup/src/flup-1.0.3/flup/server/ajp_base.py
978: import cgitb
./python-flup/src/flup-1.0.3/flup/server/scgi_base.py
570: import cgitb
./python-flup/src/flup-1.0.3/flup/server/fcgi_base.py
1235: import cgitb
./python-openid/src/python3-openid-3.2.0/examples/server.py
12:import cgitb
./python-openid/src/python3-openid-3.2.0/examples/consumer.py
14:import cgitb
./python-paste/src/Paste-3.5.3/paste/cgitb_catcher.py
12:import cgitb
./python-pygame-sdl2/src/pygame_sdl2/test/util/build_page/libs/pywebsite/__init__.py
7:import cgitb
./python-pygame-sdl2/src/pygame_sdl2/test/util/build_page/libs/build_client/update.py
9:import cgitb
./scribus/src/scribus-1.6.1/scribus/plugins/scripter/python/excepthook.py
10:import cgitb
ossaudiodev
[jelle@natrium][/mnt/arch/python-packaging]%rg -uuu "import ossaudiodev" -g '*.py' .
./pysolfc/src/PySolFC-2.21.0/pysollib/pysolaudio.py
380: # import ossaudiodev
404: import ossaudiodev
415: import ossaudiodev
435: import ossaudiodev
463: import ossaudiodev
./python-nltk/src/nltk-3.8.1/nltk/corpus/reader/timit.py
436: import ossaudiodev
if sys.version_info >= (3, 10):
import importlib.metadata as importlib_metadata
else:
import importlib_metadata
Parsing metadata
Parsing METADATA from a Python package
import sys
from packaging.metadata import parse_email, Metadata
raw, unparsed = parse_email(metadata)
parsed = Metadata.from_raw(raw)
pyversion = f"{sys.version_info.major}.{sys.version_info.minor}"
environment = {'python_version': pyversion}
deps = []
for dep in parsed.requires_dist:
if dep.marker is None:
deps.append(dep.name)
continue
    if 'python_version' in str(dep.marker) and dep.marker.evaluate(environment):
deps.append(dep.name)
# do something with extra, by filling in the "extra" env
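Since the METADATA format is RFC 822-style, the raw Requires-Dist entries that parse_email consumes can also be inspected with just the stdlib email parser. A small sketch with made-up metadata (no packaging dependency needed):

```python
from email import message_from_string

# Hypothetical METADATA content, for illustration only
METADATA = """\
Metadata-Version: 2.1
Name: example
Version: 1.0
Requires-Dist: requests
Requires-Dist: tomli; python_version < "3.11"
Requires-Dist: pytest; extra == "test"
"""

msg = message_from_string(METADATA)
# Each Requires-Dist value is 'name[extras]; marker' - split off the marker
for value in msg.get_all("Requires-Dist"):
    name, _, marker = value.partition(";")
    print(name.strip(), "|", marker.strip() or "<no marker>")
```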
But we also need to detect:
https://github.com/abravalheri/validate-pyproject/blob/5ea862ffb5a31f4611813f223a1f1c0977661196/src/validate_pyproject/remote.py#L14
Upower
https://gitlab.freedesktop.org/upower/upower/-/merge_requests/49 https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/1135 https://gitlab.gnome.org/Teams/Design/settings-mockups/-/blob/master/power/power.png https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/1461 https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/1461
Battery charge limits / profiles
Hacking
meson setup --prefix /tmp --libexecdir lib --sbindir bin --reconfigure build
meson compile -C build
- Figure out how to read udev env variables in upower
- Check if the steamdeck supports battery charge limits
- ChargeLimitEnabled property which also can be set by the client, and monitored for changes
- Save the state of whether battery limiting is enabled in /var/lib/upower/battery_saving, as the embedded controller might not save the start/stop thresholds, and resetting the BIOS/battery at 0% might reset them.
- Figure out how to expose the D-Bus option, i.e. as a property
- LG/Asus/Toshiba only have end charge limits
- Ask the Valve guys about interest in charge limiting (likely not, due to gamescope / KDE); can it be done in firmware?
- Hack control-center to read UPower properties and setting
- Investigate the Surface BIOS: it supports setting "Enable Battery Limit Mode", which limits charging to 80%.
- When upower sets the charge limits it should read them back, as not all hardware supports arbitrary percentages. FIXME? Required?
- Mail Arvid Norlander lkml@vorpal.se if Toshiba laptops have a start limit
- leaf requires GNOME to know what is up (battery status). Depends on Alan / GNOME Design decision
- borrow framework and write EC charge_control_support
- wCPO Asus https://www.asus.com/us/support/FAQ/1032726/ suggests 58/60%
- LG only has 80 and 100 as end charge limit
- Toshiba only has 80 and 100 as end charge limit
- Extend the kernel API to show valid charge options?
- Add an allowed values? sysfs entry for charge_control_end_limits [80 100] => send a RFC to the mailing list
- Add documentation for the LG/ASUS/Toshiba stop thresholds special cases
- Ask Dell about exposing charge_control_*_threshold's
- Framework driver https://github.com/DHowett/framework-laptop-kmod
Gnome Settings
busctl set-property org.freedesktop.UPower /org/freedesktop/UPower/devices/battery_BAT0 org.freedesktop.UPower.Device ChargeThresholdEnabled b true
UPower Git issues
- On startup charge-threshold-supported is FALSE; after toggling a dbus setting we re-read it and it's true. This should be read on startup!
- Implement the switch functionality, setting the DBus property
- Remove lid handling (07565ef6a1aa4a115f8ce51e259e408edbaed4cc) as systemd does it? What should GNOME do? https://gitlab.freedesktop.org/upower/upower/-/merge_requests/5#note_540149
$ busctl get-property org.freedesktop.login1 /org/freedesktop/login1 org.freedesktop.login1.Manager LidClosed
b false
15:50:57 hansg | Hmm, mutter also depends on upower for LID change monitoring, but only via DBus. So I would say revert the change that dropped LID support from upower for now, until
               | there is an alternative. (The alternative is probably adding LID-switch support to libinput, then having mutter use that and having gnome-control-center ask
               | mutter...)
- Unrelated, but we could easily implement a Linger property-changed signal. Just emit it here?
static int method_set_user_linger(sd_bus_message *message, void *userdata, sd_bus_error *error) {
_cleanup_(sd_bus_creds_unrefp) sd_bus_creds *creds = NULL;
- Handle get_devices changes => REVERT
- How do we get the current charge levels into the translated text label?
GTK tutorial/intro
Follow up
- framework charge_control settings
- multiple USB keyboard's with backlight laptop + USB
- Dell privacy screen switch state to libinput to mutter (ask Hans for hardware)
- USB re-pluggable usb keyboard backlight support in upower (ask Hans for hardware)
- Steam Deck enhancements? https://gitlab.freedesktop.org/upower/upower/-/issues/245
meson warnings
Build targets in project: 34
NOTICE: Future-deprecated features used:
* 0.60.0: {'install_subdir with empty directory'}
* 1.1.0: {'"boolean option" keyword argument "value" of type str'}
Battery charge limit
New idea, use hwdb for profiles
Match if /sys/class/power_supply/*
/etc/udev/rules.d/60-battery.rules
ACTION=="remove", GOTO="battery_end"
# sensor:<model_name>:dmi:<dmi pattern>
SUBSYSTEM=="power_supply", KERNEL=="BAT*", \
IMPORT{builtin}="hwdb 'battery:$attr{model_name}:$attr{[dmi/id]modalias}'", \
GOTO="battery_end"
LABEL="battery_end"
/etc/udev/hwdb.d/60-battery.hwdb
battery:*:dmi:*
CHARGE_LIMIT=docked;20;60
battery:*:dmi:*T14s*
CHARGE_LIMIT=docked;50;80
Like /usr/lib/udev/hwdb.d/60-sensor.hwdb
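Consuming that property on the upower side could be as simple as the sketch below; note the profile;start;end format is just the hwdb sketch above, not an existing udev convention:

```python
def parse_charge_limit(value):
    """Split a CHARGE_LIMIT property like 'docked;20;60' into
    (profile name, start threshold, end threshold)."""
    profile, start, end = value.split(";")
    return profile, int(start), int(end)

print(parse_charge_limit("docked;20;60"))  # ('docked', 20, 60)
print(parse_charge_limit("docked;50;80"))  # ('docked', 50, 80)
```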
Testing is done with udevadm
udevadm test /sys/class/power_supply/BAT0
Hard loading
sudo systemd-hwdb update --strict || echo 'could not parse successfully'
sudo udevadm trigger -v -p /sys/class/power_supply/BAT0
udevadm info -q all /sys/class/power_supply/BAT0
- multi-battery laptops: we should also allow matching on "BAT*"
- systemd PR for hwdb.d/parse_hwdb.py ?!
To add local entries, create a new file /etc/udev/hwdb.d/61-battery-local.hwdb
systemd-hwdb update
udevadm trigger -v -p DEVNAME=/dev/iio:deviceXXX
[docked]
start=50
end=80
[travel]
start=100
end=100
[conservative]
start=20
end=60
/etc/upower/battery.d/20-lenovo
[docked]
start=30
end=90
dmi=T14sGen1 & T14S
Multiple batteries
[lenovo-bat0]
start=30
end=90
dmi=Lenovo
battery=BAT0
[lenovo-bat1]
start=50
end=90
dmi=Lenovo
battery=BAT1
Where dmi is a glob match on /sys/class/dmi/id/modalias (so *T14sGen1*) and battery is an entry in /sys/class/power_supply.
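Profile selection as described above (a glob on the DMI modalias plus an optional battery name) could be sketched like this; the profile dicts mirror the hypothetical ini entries and the modalias string is made up:

```python
import fnmatch

def select_profile(profiles, modalias, battery):
    """Return the first profile whose dmi glob matches the DMI modalias
    and whose battery entry (if any) matches the power_supply name."""
    for name, prof in profiles.items():
        if not fnmatch.fnmatch(modalias, prof["dmi"]):
            continue
        if prof.get("battery", battery) != battery:
            continue
        return name
    return None

profiles = {
    "lenovo-bat0": {"dmi": "*T14sGen1*", "battery": "BAT0", "start": 30, "end": 90},
    "lenovo-bat1": {"dmi": "*T14sGen1*", "battery": "BAT1", "start": 50, "end": 90},
}
print(select_profile(profiles, "dmi:bvnLENOVO:pnT14sGen1:x", "BAT1"))  # lenovo-bat1
```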
DBus API
- a{s} of battery limit profiles
- Enable()
- Supported Property?
- Start Property
- End Property
Caveats
- Not all laptops/hardware support the same settings, so setting 80 might result in 90.
Enum
To not show too many profiles at once we should maybe just support an enum of modes; every profile has a mode entry, and in theory we could "extend" this in the future.
- low power
- docked
- travel
- conservative?
Steamdeck
https://0x0.st/HGAi.sh
Supports it through a non-standard knob max_battery_charge_level
, kernel source and driver code.
- how do I write a proper driver which uses charge_control_end_threshold
- how do other drivers do this?
- how does this get to power_supply?
- how is the power_supply class created?
Framework
- Forum post about the EC with charge limit mention
- Blog post about EC and charge limit
- Official Framework EC code
- Charge limit code?
- Charge controller chip datasheet
Setting min charge threshold
- Contact dhowett about mainlining the kernel driver
No min. percentage available. Does it just require setting SYSTEM_BBRAM_IDX_CHG_MIN? Does the hardware understand that? It seems _MIN is not supported at all.
The embedded controller is a MEC1701, but what is the charge control chip?
Kernel Driver
Some patches have been submitted and merged to support the Framework cros_ec LPC driver. However, Framework extended the Chromebook EC code with charge control, which is not merged into the kernel.
So our kernel driver should use the EC controller and obtain a reference to it somehow; an example of a driver that interacts with the ChromeOS EC is
drivers/power/supply/cros_usbpd-charger.c
See for example a function which calls an EC command:
static int cros_usbpd_charger_ec_command(struct charger_data *charger,
For a charge limit driver we likely need to write a driver similar to msi-ec.c
in drivers/platform/x86
, something like drivers/platform/x86/framework-ec.c
Questions:
- How do we bind the EC controller? And do we need to?
- It's a module_platform_driver, how does that determine when it needs to be loaded or is this some DeviceTree thing? The cros_usbpd driver does it like this, but how does that work?
static int cros_usbpd_charger_probe(struct platform_device *pd)
{
struct cros_ec_dev *ec_dev = dev_get_drvdata(pd->dev.parent);
struct cros_ec_device *ec_device = ec_dev->ec_dev;
wmi_ec uses ec_read
which in turn calls acpi_ec_read. The driver binds based on DMI strings.
- What should the framework-ec driver use?
Mainline OpenRazer power bits so upower works out of the box
A user made a bug report to support openrazer's sysfs attributes for charging/battery readout. This driver sadly exports custom sysfs attributes, while it should implement a power_supply like the hid-logitech-hidpp.c
driver, which upower can automatically pick up.
- Obtain hardware with a USB dongle; bluetooth might work out of the box?
- How does one implement a power_supply driver?
- Contact upstream about mainlining; note that the driver exposes far more stuff over sysfs which likely can't be mainlined, nor do I have interest in doing so.
Battery Calibration
After Charge limits we should consider working on battery calibration. To inhibit the system, we can use a systemd dbus call just like cc-color-calibrate
does in gnome-control-center
.
UPower instead should probably talk to systemd to inhibit, and it works as follows:
UPower adds:
- BAT0->Calibrate()
- BAT0->IsCalibrating = bool
- User calls BAT0->Calibrate()
- We set IsCalibrating = true
- We inhibit the current session https://www.freedesktop.org/wiki/Software/systemd/inhibit/
- We disallow changing the charge limits
- We disable charge limits
- We write 'full-discharge' to /sys/class/power_supply/BAT0/charge_behaviour
- We keep track of where we are, so discharge X% => 0% and then 0% => 100% is a full calibration
- Once completed we set IsCalibrating to False.
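The calibration flow above is essentially a small state machine; a sketch of the transitions (names are made up for illustration, not the actual UPower implementation):

```python
from enum import Enum, auto

class Calibration(Enum):
    IDLE = auto()
    DISCHARGING = auto()  # limits disabled, 'full-discharge' written to charge_behaviour
    CHARGING = auto()     # reached 0%, now charging back to 100%
    DONE = auto()         # IsCalibrating flips back to False

def step(state, percentage):
    """Advance the calibration state based on the current battery percentage."""
    if state is Calibration.DISCHARGING and percentage == 0:
        return Calibration.CHARGING
    if state is Calibration.CHARGING and percentage == 100:
        return Calibration.DONE
    return state

state = Calibration.DISCHARGING
for pct in (80, 40, 0, 30, 100):
    state = step(state, pct)
print(state)  # Calibration.DONE
```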
BTRFS Support
Cockpit btrfs support: initial read-only support has landed, with create/delete subvolume support on the way. Some missing features in general are:
- UDisks improvements
- Further tests
- Resize/Grow support
- Multi device support
- Adding a new device to a filesystem
- Remove a device from a filesystem
- Volume creation support
- RAID1/RAID5 etc.
- Snapshots support
- Quota support
- Robustification of libblockdev
UDisks Improvements
- use udisks for listing subvolumes (GetSubvolumes) currently does not work well for us due to MountPoints notes
- use udisks CreateSubvolume, issue (does not work as it always selects the first mountpoint)
- use udisks CreateSnapshot, issue (does not work as it always selects the first mountpoint)
- use udisks DeleteSubvolume issue (does not work as it always selects the first mountpoint and no recursive removal support)
- reproduce the issue below and create a good bug report
Bugs
Repeatedly adding/removing a device to a volume either loses a udev event or udisks does not pick it up.
btrfs device add /dev/mapper/vgroup0-lvol1 /mnt/data; date
btrfs device remove /dev/mapper/vgroup0-lvol1 /mnt/data; date
This generates a udev event, it's udisks which no longer knows!
Further tests
Test setting a different default subvolume in btrfs and see how Cockpit handles this.
Resize / Grow support
Should be exposed by UDisks, see dbus docs.
Multi device support
CreateVolume exists in UDisks and should be implemented like LVM in Cockpit. It works a bit differently than LVM in that metadata and data can have different raid profiles.
Finding out what multi device profile was selected can only be done via:
btrfs --format json filesystem df $mountpoint
For options see mkfs.btrfs --help
A device can be added or removed with AddDevice
and RemoveDevice
but currently we can't detect when a drive is missing or obtain health stats from the "array".
UDisks does not know about missing devices; btrfs filesystem show
does, but it is hard to parse:
echo >/sys/block/sdx/device/delete
Label: 'fedora-test' uuid: cece4dd8-6168-4c88-a4a8-f7c51ed4f82b
Total devices 3 FS bytes used 2.08GiB
devid 1 size 11.92GiB used 3.56GiB path /dev/vda5
devid 2 size 0 used 0 path /dev/sda MISSING
devid 3 size 512.00MiB used 0.00B path /dev/sdc
In LVM in UDisks this is shown as VolumeGroup => MissingPhysicalVolumes.
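Until that exists, the MISSING marker can be scraped from the btrfs filesystem show output above; a rough sketch against the sample output:

```python
import re

# Sample 'btrfs filesystem show' output, as shown above
SHOW = """\
Label: 'fedora-test'  uuid: cece4dd8-6168-4c88-a4a8-f7c51ed4f82b
        Total devices 3 FS bytes used 2.08GiB
        devid    1 size 11.92GiB used 3.56GiB path /dev/vda5
        devid    2 size 0 used 0 path /dev/sda MISSING
        devid    3 size 512.00MiB used 0.00B path /dev/sdc
"""

# Devices flagged MISSING at the end of their devid line
missing = re.findall(r"path (\S+) MISSING\s*$", SHOW, flags=re.MULTILINE)
print(missing)  # ['/dev/sda']
```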
- Teach libblockdev to expose the data and metadata profile
- Expand libblockdev with filesystem information using btrfs --format json filesystem $mountpoint
- Teach udisks to expose missing disks
  - Expand libblockdev BDBtrfsDeviceInfo with a missing bool field
  - Teach libblockdev bd_btrfs_list_devices to detect missing devices
  - Read up on whether btrfs exposes any health stats about the array and if it was synced
- Research if an array needs to be balanced when a device is added, as this does not happen automatically
  - Implement btrfs balance in libblockdev
  - Expose balance start/end in UDisks, requires a job API
libbtrfsutil
C test program
./configure --build=x86_64-pc-linux
gcc -O0 -ggdb -I /home/jelle/projects/btrfs-progs -L /home/jelle/projects/btrfs-progs test.c -lbtrfsutil
LD_LIBRARY_PATH=/home/jelle/projects/btrfs-progs ./a.out
Running build Python module:
LD_LIBRARY_PATH=/home/jelle/projects/btrfs-progs PYTHONPATH=libbtrfsutil/python valgrind --leak-check=full python3 test.py
Snapshots support
We currently list all subvolumes, including snapshots. Cockpit should display snapshots and regular subvolumes differently and also check whether they are read-only.
- Listing snapshots differently: how do we identify whether a subvolume is a snapshot, can we only differentiate between btrfs subvolume list -s and without?
- Snapshot creation
  - Extend the create subvolume dialog or create a new menu entry for Create snapshot
  - Add an option to create a readonly snapshot
  - Can't use UDisks for this as it suffers from the same issue as CreateSubvolume (getting the first mount point)
- Snapshot deletion - should be the same as a normal subvolume removal, just needs tests
Quota support
- Learn about quotas
- Expose quotas via libblockdev
  - Create quota group support
  - Allow setting quotas when creating a subvolume
Robustification of libblockdev
Use libbtrfsutil where possible instead of shelling out to btrfs.
- Port create/delete subvolume to libbtrfsutil
  - Use btrfs_util_delete_subvolume
  - Extend delete with a new flag for BTRFS_UTIL_DELETE_SUBVOLUME_RECURSIVE
  - Use btrfs_util_create_subvolume
- Port create_snapshot to libbtrfsutil
- Port listing subvolumes to libbtrfsutil
- libbtrfsutil extending
  - extend libbtrfsutil with per-device or volume information like btrfs filesystem show and/or data/metadata
  - add addDevice/removeDevice support if allowed
  - add createVolume support if allowed
  - setLabel support
Old notes
Subvolumes
Just directories under a subvolume
default subvolume id [5] is the ultimate root of every btrfs filesystem
Usually mounted as:
UUID=a280b604-6023-4ba5-bb9e-80d612f84b0d /home btrfs subvol=home,compress=zstd:1 0 0
A proper subvolume always has inode number 256. If a subvolume is nested and then a snapshot is taken, the cloned directory entry representing the subvolume becomes empty and the inode has number 2.
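That inode-number property gives a cheap way to test for a subvolume root without linking against libbtrfsutil. A sketch (it does not verify that the path is on btrfs at all, so it can misfire on other filesystems):

```python
import os

BTRFS_SUBVOL_INODE = 256  # every proper btrfs subvolume root has this inode number

def looks_like_subvolume(path):
    """Heuristic: a path whose inode is 256 is likely a btrfs subvolume root."""
    return os.stat(path).st_ino == BTRFS_SUBVOL_INODE

print(looks_like_subvolume("/"))
```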
Udisks
Snapshots
- How to create them? btrfs subvolume snapshot $subvolume $target
- How to mount them?
- How to identify them? Snapshots are basically subvolumes but with initial contents
- Different types of snapshots? btrfs has read-only and read/write snapshots. They can be set on creation with -r or with a property: btrfs property set /root/@snapshots/6oct-1 ro true
- How do we identify a rw/read-only snapshot? btrfs property get /root/@snapshots/6oct-1 ro
multiple disks
- Should cockpit balance for you? (udisks does not)
- What modes should we offer? raid0/raid1/raid01?
PCP
Performance Co-Pilot provides historical system metrics. PCP stores metrics in archives in /var/log/pcp/pmlogger/$(hostname).
All metrics are identified by a PMID (Performance Metric Identifier).
Each metric is part of a certain instance domain (typedef unsigned long pmInDom;), except single-value metrics, which always use PM_INDOM_NULL.
Example multi-value metric (instances):
$ pminfo -f filesys.free
filesys.free
inst [0 or "/dev/mapper/system"] value 472018336
inst [1 or "/dev/nvme0n1p1"] value 371764
Single value metric:
$ pminfo -f mem.freemem
mem.freemem
value 3015252
Obtaining the metrics from an archive is done by creating a "handle" with pmNewContext. The collection time can be set to an arbitrary time with pmSetMode. The instances to be fetched can be restricted with pmAddProfile and pmDelProfile.
Performance metric description
Metadata of a metric described in pmDesc struct
describes the format and semantics.
/* Performance Metric Descriptor */
typedef struct {
pmID pmid; /* unique identifier */
int type; /* base data type (see below) */
pmInDom indom; /* instance domain */
int sem; /* semantics of value (see below) */
pmUnits units; /* dimension and units (see below) */
} pmDesc;
The types
/* pmDesc.type - data type of metric values */
#define PM_TYPE_NOSUPPORT -1 /* not in this version */
#define PM_TYPE_32 0 /* 32-bit signed integer */
#define PM_TYPE_U32 1 /* 32-bit unsigned integer */
#define PM_TYPE_64 2 /* 64-bit signed integer */
#define PM_TYPE_U64 3 /* 64-bit unsigned integer */
#define PM_TYPE_FLOAT 4 /* 32-bit floating point */
#define PM_TYPE_DOUBLE 5 /* 64-bit floating point */
#define PM_TYPE_STRING 6 /* array of char */
#define PM_TYPE_AGGREGATE 7 /* arbitrary binary data */
#define PM_TYPE_AGGREGATE_STATIC 8 /* static pointer to aggregate */
#define PM_TYPE_EVENT 9 /* packed pmEventArray */
#define PM_TYPE_UNKNOWN 255 /* used in pmValueBlock not pmDesc */
Cockpit-pcp does not support PM_TYPE_AGGREGATE and PM_TYPE_EVENT.
Semantics describe how Cockpit should represent the data:
/* pmDesc.sem - semantics of metric values */
#define PM_SEM_COUNTER 1 /* cumulative count, monotonic increasing */
#define PM_SEM_INSTANT 3 /* instantaneous value continuous domain */
#define PM_SEM_DISCRETE 4 /* instantaneous value discrete domain */
The C code doesn't do anything with this information except return it to the client in the meta message. However, the derive == rate option requires the bridge to calculate the sample rate based on the last value and the provided interval.
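A minimal sketch of that rate calculation in plain Python (the function name and signature are illustrative, not the bridge's actual code):

```python
def derive_rate(last_value, value, interval_ms):
    """Derive a per-second rate from two successive counter samples.

    PM_SEM_COUNTER metrics increase monotonically, so the rate is the
    delta between samples divided by the sample interval in seconds.
    """
    return (value - last_value) / (interval_ms / 1000.0)

# Counter went from 100 to 160 over a 3000 ms interval: 60 / 3 s
print(derive_rate(100, 160, 3000))
# → 20.0
```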
PCP Archive source
The metrics1 channel supports passing source=pcp-archive or source=/path/to/archive; the latter was likely introduced for testing. Archive-specific options from docs/protocol.md:
- "metrics" (array): Descriptions of the metrics to use. See below.
- "instances" (array of strings, optional): When specified, only the listed instances are included in the reported samples.
- "omit-instances" (array of strings, optional): When specified, the listed instances are omitted from the reported samples. Only one of "instances" and "omit-instances" can be specified.
- "interval" (number, optional): The sample interval in milliseconds. Defaults to 1000.
- "timestamp" (number, optional): The desired time of the first sample. This is only used when accessing archives of samples. This is either the number of milliseconds since the epoch, or (when negative) the number of milliseconds in the past. The first sample will be from a time not earlier than this timestamp, but it might be from a much later time.
- "limit" (number, optional): The number of samples to return. This is only used when accessing an archive.
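Put together, an open message for an archive source might look like the sketch below. The option names come from the list above; the channel id and archive path are made up, and the "command"/"channel"/"payload" framing follows cockpit's control message format in docs/protocol.md:

```python
import json

# Hypothetical metrics1 open message for reading from a PCP archive
open_msg = {
    "command": "open",
    "channel": "a1",                  # illustrative channel id
    "payload": "metrics1",
    "source": "/var/log/pcp/pmlogger/myhost",
    "metrics": [{"name": "network.interface.total.bytes", "derive": "rate"}],
    "omit-instances": ["lo"],
    "interval": 1000,                 # sample interval in milliseconds
    "timestamp": -3600000,            # negative: one hour in the past
    "limit": 100,                     # return at most 100 samples
}

print(json.dumps(open_msg, indent=2))
```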
Reading data from archive
from pcp import pmapi
import cpmapi as c_api

# Open an archive context; this can cover multiple archives if a
# directory such as /var/log/pcp/pmlogger/hostname is given
context = pmapi.pmContext(c_api.PM_CONTEXT_ARCHIVE, '/path/to/archive')
# Get the internal metric ids for the user-provided metrics
pmids = context.pmLookupName('mock.value')
# Get the descriptions; these are used for scaling values if required
descs = context.pmLookupDescs(pmids)
results = context.pmFetch(pmids)
for i in range(results.contents.numpmid):
    atom = context.pmExtractValue(results.contents.get_valfmt(i),
                                  results.contents.get_vlist(i, 0),
                                  descs[0].contents.type,
                                  c_api.PM_TYPE_U32)
    print(f"#mock.value={atom.ul}")
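The "timestamp" and "limit" options can be applied while iterating samples. A plain-Python sketch of that selection logic, using hypothetical (timestamp_ms, value) tuples rather than the real pmapi result types:

```python
def select_samples(samples, timestamp=None, limit=None):
    """Yield (ts_ms, value) pairs honoring the channel's options:
    "timestamp" (first sample not earlier than this) and
    "limit" (maximum number of samples)."""
    count = 0
    for ts_ms, value in samples:
        if timestamp is not None and ts_ms < timestamp:
            continue  # skip samples recorded before the requested start
        if limit is not None and count >= limit:
            break     # enough samples returned
        yield ts_ms, value
        count += 1

samples = [(1000, 1), (2000, 2), (3000, 3), (4000, 4)]
print(list(select_samples(samples, timestamp=2000, limit=2)))
# → [(2000, 2), (3000, 3)]
```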
Debugging
cpf open metrics1 source="/tmp/pytest-of-jelle/pytest-current/timestamps-archives0/" metrics='[{ "name": "mock.value" }]' timestamp=1688162400000 limit=1 : wait | G_MESSAGES_DEBUG=none /usr/lib/cockpit/cockpit-pcp | /usr/bin/cat
Unit tests
- Test limiting the data: generate a 1000-record archive (the limit option in the metrics1 channel)
- Different types of data; currently only U32 is tested. Cockpit requests "kernel.all.cpu.nice" (with a derive: "rate"), "mem.physmem", "swap.pagesout"
- Test omit-instances: { name: "network.interface.total.bytes", derive: "rate", "omit-instances": ["lo"] }
- Test multi-value metrics (which have "instances", like network.interface.total.bytes)
- Test passing timestamps, i.e. load timestamp
- Test passing instances
- Test sample interval changes
Questions
- Why do we need to read archive per archive? The API supports reading all of them for us.
  - Is it because of error handling?
  - Is it because of limiting?
  - Is it because of the start timestamp?
References
Files
To Do
Re-design
- Sidebar design update
Symlinks
- What happens when I double click a symlink to .., it moves to the symlink. Expected?
- What happens when I change the permissions of a symlink?
- Creation
- Removal
- Cut/Copy and paste behaviour
Copy / Paste
https://github.com/cockpit-project/cockpit-files/issues/467
- No indication of big copies
- You can copy things from different directories leading to unexpected UX
Cut / Paste
- This is a move; what happens with the permissions?
- This is a move; across filesystems it becomes a copy
Upload
- Upload as superuser
- Design tweaks
Rename
- Make it atomic with renameat2
- Make it also work on NFS and on filesystems that don't support it
Table
Columns
- Missing tests for sorting on column in view
- Configuring column visibility
- Showing permissions
- Showing filetypes
Permissions
- Recursive applying
- Show SELinux context in the sidebar
- Show ACLs
  - Needs research
  - Recursive applying
Questions
- Symlinks when navigating / changing permissions
- Column visibility
- Copy / paste can be done everywhere and never forgets
- Copy / paste indicator
Mkosi for Arch Boxes
Questions
- How do we make profiles?
  - profile for basic
  - profile for cloud-init
  - profile for vagrant and vagrant-libvirt
- How do we make a 40GB btrfs partition for basic?
- How do I add a user with sudo? => https://github.com/DaanDeMeyer/mkosi/blob/main/mkosi/resources/mkosi.md#frequently-asked-questions-faq
- How do I cache the package manager step?
- How do I create a qcow2 after the final step? => mkosi.postoutput
- Do we need to remove /etc/machine-id?
- How do we apply sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT=\"rootflags=compress-force=zstd\"/' "${MOUNT}/etc/default/grub"?
- Can we boot test in CI? Check that swap is enabled, that systemd units run, and that systemctl --failed is green?
Resources
- https://btrfs.readthedocs.io/en/latest/Swapfile.html
- https://swsnr.de/archlinux-rescue-image-with-mkosi/
- https://noise.getoto.net/2024/01/10/a-re-introduction-to-mkosi-a-tool-for-generating-os-images/
Requirements
- mkosi
- systemd-ukify
- grub
- btrfs-progs
Creation
cd projects/arch-boxes/cloud-init-image
mkdir mkosi.{cache,output}
mkosi build
Issues
Traceback (most recent call last):
File "/home/jelle/projects/mkosi/mkosi/run.py", line 60, in uncaught_exception_handler
yield
File "/home/jelle/projects/mkosi/mkosi/run.py", line 101, in fork_and_wait
target(*args, **kwargs)
File "/home/jelle/projects/mkosi/mkosi/__init__.py", line 4637, in run_build
build_image(Context(args, config, workspace=workspace, resources=resources))
File "/home/jelle/projects/mkosi/mkosi/__init__.py", line 3833, in build_image
install_grub(context)
File "/home/jelle/projects/mkosi/mkosi/__init__.py", line 1460, in install_grub
grub_mkimage(context, target="i386-pc", modules=("biosdisk",))
File "/home/jelle/projects/mkosi/mkosi/__init__.py", line 1371, in grub_mkimage
assert mkimage
^^^^^^^
AssertionError
- Lack of grub on my system, but assert could tell me that :)
- Should mkosi qemu read the BiosBootLoader option and then use mkosi --qemu-firmware bios qemu?
- How to exit QEMU? Maybe add ctrl+] support like nspawn?
- Modifications made in mkosi qemu stay, which seems somewhat unexpected