Jelle's notes

A collection of public notes on various topics generated using mdbook

Reproducible Builds

  • Python issues due to tests?: https://reproducible.archlinux.org/api/v0/builds/342940/diffoscope
  • Java jar generation in libs

Java JAR

gradle maven

Fedora

  • https://github.com/rpm-software-management/mock/issues/692 - clamp timestamps
  • https://github.com/rpm-software-management/rpm/pull/1532 - build info file

  • try to reproduce cockpit with mockbuild

https://github.com/fepitre/rpmreproduce

flatpak

https://fedoramagazine.org/an-introduction-to-fedora-flatpaks/ https://blogs.gnome.org/mclasen/2018/07/07/flatpak-making-contribution-easy/ https://ranfdev.com/blog/flatpak-builds-are-not-reproducible/ https://github.com/flatpak/flatpak-builder/issues/251 https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/issues/1320

  • diffoscope support?
  • CI on flathub repositories?
  • reproducing

Diffing a flatpak

For Cockpit, comparing the build dir output

flatpak-builder --disable-cache  --disable-rofiles-fuse --force-clean flatpak-build-dir1  org.cockpit_project.CockpitClient.yml
flatpak-builder --disable-cache  --disable-rofiles-fuse --force-clean flatpak-build-dir2  org.cockpit_project.CockpitClient.yml
diffoscope flatpak-build-dir1 flatpak-build-dir2

Comparing using two repos:

flatpak-builder --repo=repo1 --disable-cache  --disable-rofiles-fuse --force-clean flatpak-build-dir  org.cockpit_project.CockpitClient.yml
flatpak-builder --repo=repo2 --disable-cache  --disable-rofiles-fuse --force-clean flatpak-build-dir  org.cockpit_project.CockpitClient.yml

Get the refs from ostree:

ostree refs --repo=repo1
ostree show --repo=repo1 runtime/org.cockpit_project.CockpitClient.Debug/x86_64/devel
ostree show --repo=repo2 runtime/org.cockpit_project.CockpitClient.Debug/x86_64/devel

Confirm the ContentChecksum is the same.

live iso

Reproducible live iso

Issues

  • libopensmtpd - mandoc has a "$Mdocdate$" variable which does not respect SOURCE_DATE_EPOCH
  • hugin - gzip timestamps
  • pcp - gzip timestamp
  • libkolabxml XML ordering https://git.kolab.org/T2642 https://bugzilla.opensuse.org/show_bug.cgi?id=1060506 try to set XERCES_DEBUG_SORT_GRAMMAR, but that needs to be in xerces-c which is kinda untested and dumb
  • mm-common
  • musescore https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope-results/musescore3.html
  • openpmix PMIX_CONFIGURE_HOST
  • perl-crypt-random-tesha2 don't advertise entropy
  • ssr records $USER and $date
  • libgtop records uname
  • openxr script is not reproducible.
  • php phar timestamps
  • namazu records $(hostname)
  • dosemu timestamps
  • echoping hostname
  • python-lxml-docs timestamp in "Generated On"
  • ant-doc javadoc adds timestamp to documentation. Generated by javadoc (14.0.2) on Sun Nov 15 16:33:44 UTC 2020
  • emelfm2 kernel + timestamp
  • libiio timestamp
  • gajim man pages (gzip) and pyc bytecode
  • fs-uae zip file not ordered? permission? zip issues?!
  • gutenprint uname/ timestamp recording
  • libmp4v2 timestamp
  • gdk-pixbuf2-docs order issue in generated documentation
  • ghostpcl timestamp
  • libgxps timestamp
  • netcdf & netcdf-fortran uname
  • nethack build date
  • python-lxml timestamp in generated docs
  • qastools gzip timestamp (https://gitlab.com/sebholt/qastools/)
  • qtikz sqlite database with datetime difference in TimeStampTable
  • rmlint - gzip timestamp and timestamp in rmlint
  • glhack - timestamp
  • glob2 - timestamp
  • docker - timestamp
  • radamsa - needs a rebuild
  • eq10q - needs a rebuild
  • harvid needs a rebuild due to size issues with an older makepkg version (fails to build)
  • colord binary seems to embed the profile data as a random hash?
  • tbb timestamp, build host and build kernel
  • ruby-colorize timestamp in gemspec
  • rebuild ruby-* packages which do not remove "$pkgdir/$_gemdir/gems/$_gemname-$pkgver/ext" as it contains non-reproducible files.
  • i7z - gzip timestamp
  • openmpi - records hostname
  • v2ray-domain-list-community - geosite.dat not ordered
  • unrealircd - timestamp in binary
  • libcec - hostname/timestamp
  • hevea - ocaml build /tmp/$tmp path differs https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=786913
  • mari0 - zip file
  • arj - date https://reproducible.archlinux.org/api/v0/builds/118386/diffoscope
  • ibus - date
  • argyllcms - (date) - https://www.freelists.org/list/argyllcms send email about created date containing hours/minutes/second and SOURCE_DATE_EPOCH
  • dd_rescue - man page gz timestamp => mail maintainer https://sourceforge.net/p/ddrescue/tickets/
  • deepin-wallpapers => suspected an ordering issue with the wildcard in the makefile; nope, most likely image-blur is not reproducible

openxr reproducer

python specification/scripts/genxr.py -registry specification/registry/xr.xml -o /home/jelle/projects/OpenXR-SDK-Source/build/include/openxr/   openxr_reflection.h

Man page gzip timestamp issue

Fixing all the gzip timestamp issue packages is a lot of work, and patching upstream everywhere is not really doable. An idea might be to detect gzip files which are non-reproducible and let a makepkg option like zipman (or an extended zipman) take care of this.

touch foo
gzip foo
file foo.gz | grep -q modified && gunzip -c foo.gz | gzip -9 -n -c > test.gz
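
A zipman-style pass over a whole package could look like this sketch, assuming GNU gzip and file(1), with $pkgdir as in a PKGBUILD:

find "$pkgdir" -name '*.gz' | while read -r f; do
	file "$f" | grep -q 'last modified' && gunzip -c "$f" | gzip -9 -n -c > "$f.new" && mv "$f.new" "$f"
done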

Haskell packages

Try to build them without !strip and then compare the packages.
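
A minimal sketch of that comparison (package name hypothetical): remove !strip from options=() in the PKGBUILD, rebuild, and diff the old and new package:

diffoscope haskell-foo-1.0-1-x86_64.pkg.tar.zst haskell-foo-1.0-2-x86_64.pkg.tar.zst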

https://gitlab.haskell.org/ghc/ghc/-/wikis/deterministic-builds https://gitlab.haskell.org/ghc/ghc/-/issues/12935

Ideas

  • Year blog post
  • Documentation about reproducible builds in the packager wiki / packaging wiki

Package pacman in Debian

 -> sudo pbuilder create
 -> sudo cowbuilder create
 -> sudo gbp buildpackage --git-ignore-new --git-pbuilder -nc

rebuilderd-website

  • Improve loading performance
  • add make install target

Python issues

For pyc differences, PYTHONHASHSEED can be set to a fixed value to prevent the random hash initialisation from being embedded in pyc files.

For test files being shown in the diffoscope results as pyc files which are not in the rebuilt package, the issue is probably that pyc files generated by running tests are installed erroneously. Exporting PYTHONDONTWRITEBYTECODE=1 when running the tests prevents this.
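
Both workarounds combined in a PKGBUILD check() function (a sketch; pytest stands in for whatever test runner the package uses):

check() {
	export PYTHONHASHSEED=0          # fixed seed so the random hash initialisation isn't embedded in pyc files
	export PYTHONDONTWRITEBYTECODE=1 # don't write pyc files while running the tests
	pytest
}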

Rebuilderd

Rebuilderd doesn't clean up old builds; to remove all builds which are no longer referenced by a package:

delete from builds where id not in (select build_id from packages where build_id is not null);
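
For example against rebuilderd's SQLite database (the path is an assumption, adjust to your setup):

sqlite3 /var/lib/rebuilderd/rebuilderd.db \
	"delete from builds where id not in (select build_id from packages where build_id is not null);"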

Rebuilderd also stores logs for succeeded builds, which isn't required.

Requeueing bad builds can be done as follows:

rebuildctl pkgs requeue --suite core --status BAD

Improvements

  • add build date to output of rebuildctl pkgs ls --status BAD --suite core
  • add build date to the /log output
  • add build host to the /log output (so one can identify if a host has a bad build env)
  • add a cleanup thread that runs occasionally cleaning up old rebuild results.

Autoclassify script

Make an autoclassify script based on the diffoscope html output stored in rebuilderd. Maybe using the rebuilderd database for now => extract the diffoscope html and inspiration drawn from this script

Twitter bot

Twitter bot for notifications about reproducible builds in IRC and allowing tweets from irc.

Recipes

Quiche

  • bacon strips
  • broccoli
  • mushrooms
  • grated cheese
  • 4 eggs
  • 200ml cooking cream

Pancakes

  • 300 gram flour
  • 1 teaspoon salt
  • 2 eggs
  • 500 ml milk
  • 30 gram butter

Practice

  • Songs
  • Music Theory
  • Scales

Songs

Holy Ghost Fire riff

E|--------------------------------------|
B|--------------------------------------|
G|--------------------------------------|
D|-----------5-------------0------------|
A|------5_7~---7-5-----0^2---5---5^7~---|
E|-3^0-------------7_3---------7--------|

Picking

https://www.soundslice.com/slices/7jHcc/

Scales

Minor Pentatonic Scale

E|---------------------5-8-------------|
B|-----------------5-8-----------------|
G|-------------5-7---------------------|
D|---------5-7-------------------------|
A|-----5-7-----------------------------|
E|-5-8---------------------------------|

Major scale

e|---------------------------4-5-|
B|-----------------------5-7-----|
G|-----------------4-6-7---------|
D|-----------4-6-7---------------|
A|-----4-5-7---------------------|
E|-5-7---------------------------|

Minor Scale

E|-----------------------------5-7-8-|
B|-----------------------5-6-8-------|
G|-----------------4-5-7-------------|
D|-------------5-7-------------------|
A|-------5-7-8-----------------------|
E|-5-7-8-----------------------------|

Theory

  • Chords, Progressions & Keys
  • Triads
  • Fretboard
  • Chords of a key
  • Chord Theory

Chords of a key

G Major scale

G A B C D E F#

The half steps fall between the 3rd and 4th notes (B-C) and between the 7th note and the octave (F#-G); all other steps are whole steps (W W H W W W H).

Learning Ardour

  • ardour 6 quickstart
  • How to monitor my recording tracks properly
  • How to make mono recording stereo
  • Learn recording hotkeys in ardour

Hedgedoc

  • Style frontpage

Configuration

/etc/webapps/hedgedoc/config.json

{
    "production": {
        "sessionSecret": "laPah7ohSheeroo4yep5shi7ioghie",
        "email": false,
        "domain": "archtest.lxd",
        "loglevel": "debug",
        "protocolUseSSL": true,
        "allowAnonymous": false,
        "hsts": {
            "enable": true,
            "maxAgeSeconds": 31536000,
            "includeSubdomains": true,
            "preload": true
        },
        "csp": {
            "enable": true,
            "directives": {
            },
            "upgradeInsecureRequests": "true",
            "addDefaults": true,
            "addDisqus": false,
            "addGoogleAnalytics": false
        },
        "cookiePolicy": "lax",
        "db": {
            "dialect": "sqlite",
            "storage": "/var/lib/hedgedoc/db.hedgedoc.sqlite"
        },
        "linkifyHeaderStyle": "gfm"
    }
}

/etc/webapps/hedgedoc/sequelizerc

var path = require('path');

module.exports = {
    'config':          path.resolve('config.json'),
    'migrations-path': path.resolve('lib', 'migrations'),
    'models-path':     path.resolve('lib', 'models'),
    'url':             'sqlite:///var/lib/hedgedoc/db.hedgedoc.sqlite'
}

Nginx

location / {
	proxy_pass http://127.0.0.1:3000;
	proxy_set_header Host $host;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Forwarded-Proto $scheme;
}

location /socket.io/ {
	proxy_pass http://127.0.0.1:3000;
	proxy_set_header Host $host;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Forwarded-Proto $scheme;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection $connection_upgrade;
}
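
The Connection header above uses $connection_upgrade, which nginx does not define by itself; a map block like the following in the http context is assumed:

map $http_upgrade $connection_upgrade {
	default upgrade;
	''      close;
}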

Keycloak

Keycloak instructions

systemd hedgedoc service override (e.g. via systemctl edit hedgedoc.service):

CMD_OAUTH2_USER_PROFILE_URL=https://archkeycloak.lxd/auth/realms/archlinux/protocol/openid-connect/userinfo
CMD_OAUTH2_USER_PROFILE_USERNAME_ATTR=preferred_username
CMD_OAUTH2_USER_PROFILE_DISPLAY_NAME_ATTR=name
CMD_OAUTH2_USER_PROFILE_EMAIL_ATTR=email
CMD_OAUTH2_TOKEN_URL=https://archkeycloak.lxd/auth/realms/archlinux/protocol/openid-connect/token
CMD_OAUTH2_AUTHORIZATION_URL=https://archkeycloak.lxd/auth/realms/archlinux/protocol/openid-connect/auth
CMD_OAUTH2_CLIENT_ID=hedgedoc
CMD_OAUTH2_CLIENT_SECRET=23829d32-e820-4d03-8c5d-7a6b996daec0
CMD_OAUTH2_PROVIDERNAME=Keycloak
CMD_DOMAIN=archtest.lxd
CMD_PROTOCOL_USESSL=true 
CMD_URL_ADDPORT=false
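
These variables would be wired into the unit via a systemd drop-in, for example (path assumed, one Environment= line per variable):

# /etc/systemd/system/hedgedoc.service.d/override.conf
[Service]
Environment=CMD_OAUTH2_CLIENT_ID=hedgedoc
Environment=CMD_OAUTH2_PROVIDERNAME=Keycloak
# ... repeat for the remaining CMD_* settings above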

golang

Project ideas

  • golang dns client using RDAP with json output

Start a project

go mod init github.com/jelly/$project

Common modules

  • cobra
  • logrus

Types

  • slice []string{"lala", "lolol"}
  • string
  • bool

Gotchas

Go executes init functions automatically at program startup, after global variables have been initialized.
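
A minimal sketch of that ordering:

package main

import "fmt"

// package-level variables are initialized first
var answer = compute()

func compute() int { return 42 }

// init runs after globals are initialized, before main
func init() {
	fmt.Println("init: answer =", answer)
}

func main() {
	fmt.Println("main")
}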

Type assertions

var greeting interface{} = "hello world"
greetingStr, ok := greeting.(string)
if !ok {
	fmt.Println("not asserted")
}

Type assertions can only take place on interfaces; on the first line we assign a string to the interface greeting. While greeting now holds a string, its static type is the interface. To get back the original type of greeting we can assert that it is a string using greeting.(string).

If you are not sure of the type of an interface a switch can be used:

var greeting interface{} = 42

switch g := greeting.(type) {
	case string:
		fmt.Println("string of length", len(g))
	case int:
		fmt.Println("integer of value", g)
	default:
		fmt.Println("no idea what g is")
}

This is called an assertion because the original type of greeting (an interface) is not changed.

Type conversions

greeting := []byte("hello world")
greetingStr := string(greeting)

In Golang a type defines:

  1. How the variable is stored (underlying data structure)
  2. What you can do with the variable (methods / functions it can be used in)

In Golang you can define your own types:

// myInt is a new type whose base type is `int`
type myInt int

// The AddOne method works on `myInt` types, but not regular `int`s
func (i myInt) AddOne() myInt { return i + 1}

func main() {
	var i myInt = 4
	fmt.Println(i.AddOne())
}

As myInt uses the same underlying data structure, we can convert a myInt to an int.

var i myInt = 4
originalInt := int(i)

This means types can only be converted if the underlying data structure is the same.

declaring variables

There are two ways to declare variables in golang (Go infers the type from the initialization):

  1. using the var keyword: var foo int = 4
  2. using the short declaration operator (:=): foo := 4

Differences:

var keyword:

  • used to declare and initialize variables inside and outside of functions
  • the scope can therefore be package level, global level or local
  • declaration and initialization of the variables can be done separately
  • optionally a type can be given with the declaration

short declaration operator:

  • used to declare and initialize a variable only inside functions
  • variables have only local scope as they can only be declared in functions
  • declaration and initialization of the variables must be done at the same time
  • there is no need to give a type

struct

Named structs

type Employee struct {
	firstName string
	lastName string
	age int
}

func main() {
	emp1 := Employee{
		firstName: "Sam",
		lastName: "Anderson",
		age: 25,
	}
	// Zero value of a struct, all fields will be 0 or ""
	var emp2 Employee
}

Anonymous struct

foo := struct {
	firstName string
	lastName  string
}{
	firstName: "Steve",
	lastName:  "Jobs",
}

Pointers to a struct

emp1 := &Employee{
	firstName: "Steve",
	lastName: "Jobs",
}

fmt.Println("First Name:", (*emp1).firstName);
fmt.Println("First Name:", emp1.firstName);
Anonymous fields

It is possible to create structs with fields that contain only a type without the field name. Even though they have no explicit name, by default the name of an anonymous field is the name of its type.

type Person struct {
	string
	int
}

Nested structs

type Address struct {  
    city  string
    state string
}

type Person struct {  
    name    string
    age     int
    address Address
}

Fields that belong to an anonymous struct field in a struct are called promoted fields since they can be accessed as if they belong to the struct which holds the anonymous struct field.

type Address struct {  
    city string
    state string
}
type Person struct {  
    name string
    age  int
    Address
}

func main() {  
    p := Person{
        name: "Naveen",
        age:  50,
        Address: Address{
            city:  "Chicago",
            state: "Illinois",
        },
    }

    fmt.Println("Name:", p.name)
    fmt.Println("Age:", p.age)
    fmt.Println("City:", p.city)   //city is promoted field
    fmt.Println("State:", p.state) //state is promoted field
}

Structs equality

Structs are value types and are comparable if each of their fields is comparable. Two struct variables are considered equal if their corresponding fields are equal.

Interface

An interface is a set of methods and a type.

For structs

An interface is a placeholder for a struct which implements its functions; this can be used to allow a method to take an interface as an argument.

package main

import (
    "fmt"
    "math"
)

type geometry interface {
    area() float64
    perim() float64
}

type rect struct {
    width, height float64
}

func (r rect) area() float64 {
    return r.width * r.height
}
func (r rect) perim() float64 {
    return 2*r.width + 2*r.height
}

type circle struct {
    radius float64
}

func (c circle) area() float64 {
    return math.Pi * c.radius * c.radius
}
func (c circle) perim() float64 {
    return 2 * math.Pi * c.radius
}

func measure(g geometry) {
    fmt.Println(g)
    fmt.Println(g.area())
    fmt.Println(g.perim())
}

func main() {
    r := rect{width: 3, height: 4}
    c := circle{radius: 5}

    measure(r)
    measure(c)
}

The interface{} type

The interface{} type, the empty interface, has no methods. This means that any function which takes an interface{} value as a parameter can be supplied with any value.

package main

import (
    "fmt"
)

func checkType(i interface{}) {
    switch i.(type) {          // the switch uses the type of the interface
    case int:
        fmt.Println("Int")
    case string:
        fmt.Println("String")
    default:
        fmt.Println("Other")
    }
}

func main() {
    var i interface{} = "A string"

    checkType(i)   // String
}

Equality of interface values

Two interface values are equal if they are both nil, or if the underlying value and the type are equal.

package main
 
import (
    "fmt"
)
 
func isEqual(i interface{}, j interface{}) {
    if(i == j) {
        fmt.Println("Equal")
    } else {
        fmt.Println("Inequal")
    }
}
 
func main() {
    var i interface{}
    var j interface{}
     
    isEqual(i, j)   // Equal
     
    var a interface{} = "A string"
    var b interface{} = "A string"
     
    isEqual(a, b)   // Equal
}

goroutines

modules?

context

Security checklist

Checklists from certifiedsecure.com

Server configuration checklist

Mark result with ✓ or ✗

#     Certified Secure Server Configuration Checklist                        Result  Ref
1.0   Generic
1.1   Always adhere to the principle of least privilege
2.0   Version Management
2.1   Install security updates for all software
2.2   Never install unsupported or end-of-life software
2.3   Install software from a trusted and secure repository
2.4   Verify the integrity of software before installation
2.5   Configure an automatic update policy for security updates
3.0   Network Security
3.1   Disable all extraneous services
3.2   Disable all extraneous ICMP functionality
3.3   Disable all extraneous network protocols
3.4   Install a firewall with a default deny policy
3.5   Firewall both incoming and outgoing connections
3.6   Disable IP forwarding and routing unless explicitly required
3.7   Separate servers with public services from the internal network
3.8   Remove all dangling DNS records
3.9   Enable DNS record signing
4.0   Authentication and Authorization
4.1   Configure authentication for access to single user mode
4.2   Configure mandatory authentication for all non-public services
4.3   Configure mandatory authorization for all non-public services
4.4   Configure mandatory authentication for all users
4.5   Enforce the usage of strong passwords
4.6   Remove all default, test, guest and obsolete accounts
4.7   Configure rate limiting for all authentication functionality
4.8   Disable remote login for administrator accounts
4.9   Never implement authorization based solely on IP address
5.0   Privacy and Confidentiality
5.1   Configure services to disclose a minimal amount of information
5.2   Transmit sensitive information via secure connections
5.3   Deny access to sensitive information via insecure connections
5.4   Store sensitive information on encrypted storage
5.5   Never use untrusted or expired SSL certificates
5.6   Configure SSL/TLS to accept only strong keys, ciphers and protocols
5.7   Configure an accurate and restrictive CAA DNS record
5.8   Use only widely accepted and proven cryptographic primitives
5.9   Use existing, well-tested implementations of cryptographic primitives
5.10  Separate test, development, acceptance and production systems
5.11  Never allow public access to test, development and acceptance systems
5.12  Never store production data on non-production systems
5.13  Configure a secure default for file permissions
5.14  Configure file permissions as restrictive as possible
5.15  Disable the indexing of files with sensitive information
5.16  Configure automated removal of temporary files
6.0   Logging Facilities
6.1   Restrict access to logging information
6.2   Configure logging for all relevant services
6.3   Configure logging for all authentication and authorization failures
6.4   Configure remote logging for all security related events
6.5   Routinely monitor and view the logs
6.6   Never log sensitive information, passwords or authorization tokens
7.0   Service Specific
7.1   Complete the Secure Development Checklist for Web Applications
7.2   Disable open relaying for mail services
7.3   Disable email address enumeration for mail services
7.4   Disable anonymous uploading for FTP services
7.5   Disable unauthorized AXFR transfers in the DNS
8.0   Miscellaneous
8.1   Configure rate limiting for all resource-intensive functionality
8.2   Prevent unintended denial of service when configuring rate limiting
8.3   Check configuration of all services for service-specific issues
8.4   Check for and mitigate server- or setup-specific problems

Tools

  • mdcat cat for markdown
  • httpie HTTP client
  • taskell CLI kanboard
  • oxipng PNG optimizer written in Rust
  • mdbook command line tool to create books using Markdown
  • diffoscope diff on steroids
  • fzf fuzzy finder
  • tmux
  • inotify-tools
  • tig

Releasing

Benchmarking

  • oha http load benchmark tool

Load test with 50 requests/second for 2 minutes

oha https://example.org -q 50 -z 2m

Development

inotifywait

npm run watch
while true; do inotifywait -r dist | while read r; do scp dist/* c:/usr/share/cockpit/certificates/; done; done

Certificates / CA

step-cli certificate create root-ca root-ca.crt root-ca.key --profile root-ca
step certificate install root-ca.crt
# General client cert
step-cli certificate create  $(hostname -f) server.crt server.key --san $(hostname -f) --san $(hostname -s) --profile leaf --ca ./root-ca.crt --ca-key ./root-ca.key --no-password --insecure --not-after "$(date --date "next year" -Iseconds)"

Docs

  • tldr - cheatsheets for cli tools

General vim tricks

  • calculations: in insert mode, press C-r = then insert your calculation
  • resizing panes: Ctrl+w + and Ctrl+w -

Required packages

  • fzf - fzf plugin
  • the_silver_searcher - searching in files for the fzf plugin
  • cargo / rust - rust LSP integration
  • pyright - Python LSP integration

Plugins

Vim-wiki bindings

Publishing my notes:

nnoremap <F1> :terminal make serve<CR>
nnoremap <F2> :!make rsync_upload<CR>
nnoremap <F3> :!make commit_push<CR>
binding  action
F1       execute mdbook serve
F2       publish to notes.vdwaa.nl
F3       git commit and push

Standard vimwiki bindings:

binding    action
<C-Space>  toggle listitem on/off
gl*        make the item before the cursor a list
<Tab>      (insert mode) go next/create cell
+          create/decorate links

vimwiki diary

binding                     action
wi                          go to diary index
ww                          create a new diary entry
:VimwikiDiaryGenerateLinks  update diary index

Fugitive bindings

binding     action
<space>ga   git add
<space>gs   git status
<space>gc   git commit
<space>gt   git commit (full path)
<space>gd   git diff (:Gdiff)
<space>ge   git edit (:Gedit)
<space>gr   git read (:Gread)
<space>gw   git write (:Gwrite)
<space>gl   git log
<space>gp   git grep
<space>gm   git move
<space>gb   git branch
<space>go   git checkout
<space>gps  git push
<space>gpl  git pull

Ale bindings

binding  action
gd       Go to definition
gr       Go to references
gs       Symbol search
K        Display function/type info
gR       Rename variable/function

Commentary bindings

binding  action
gcc      comment out a line (takes a count)
gcap     comment out a paragraph

PKGBUILD

binding  action
F1       bump pkgrel
F2       run updpkgsums

Rust

binding    action
<Leader>b  cargo test
<Leader>c  cargo clippy
<Leader>x  cargo run
<Leader>d  set break point
<Leader>r  run debugger
F5         start debugger
C-b        compile rust

FZF

binding    action
F          Search all files
<space>gf  Git Files
<space>ff  Search in files using the_silver_searcher
<space>ss  List all snippets

Wishlist

  • Git integration
  • Snippets
  • Debugging
  • Language features: completion, find function definitions

GDB shortcuts

command   description
continue  continue execution normally
finish    continue executing until function returns
step      execute next line of source code
next      execute next line of source code, without descending into functions

Providing args:

gdb --args python example.py

Or in the gdb shell

args --config foo.toml

Printing variables:

print filename

print config.interval

Investigate

Neovim setup

Goals

LSP

For the LSP use neovim's native LSP server and neovim/nvim-lspconfig for configuration. Use :LspInfo to verify a language server is available and works for the file you are editing.

null-ls => :NullLsInfo

Linting

Completor

  • Git integration => tpope/vim-fugitive
  • Searching files => telescope
  • Smart commentor

TODO

  • https://github.com/numToStr/Comment.nvim
  • https://github.com/nvim-treesitter/nvim-treesitter-context
  • cmp (completor)
  • lsif

Resources

x86 tablet

Notes about using Arch / Gnome on an x86 tablet

To Do

  • Disable the broken webcam driver (atom-isp2) in the Arch kernel
  • No way to copy/paste from osd/applications
  • No window controls with fingers in gnome
  • Loading gnome is a bit slow, ~ 10-15 seconds (I/O?)
  • Try out the phosh compositor
  • Speakers emit a loud beep after a while, when playing a video (in firefox/chromium on npostart.nl or kodi)
  • Landscape mode does not work in gnome / panel => iio-sensor-proxy (add to gnome group?)
  • Hardware video decoding (mpv) (6263a231b3edabe651c64ab55be2a429b717ac9a in dotfiles)
  • Firefox does not support one finger scrolling, chromium does (issue)
  • Get bluetooth working; needs the BCM4343A0.hcd firmware

Gnome

  • intel-media-driver for hardware video acceleration
  • sof-firmware for audio
  • caribou? Or onboard for OSD keyboard
  • iio-sensor-proxy for screen orientation

Firefox one finger scrolling

cp /usr/share/applications/firefox.desktop ~/.local/share/applications/
vim ~/.local/share/applications/firefox.desktop

find the Exec line in the [Desktop Entry] section and change it to

Exec=env MOZ_USE_XINPUT2=1 /usr/lib/firefox/firefox %u

Apps

  • firefox does not support PWA's..
  • twitch => browser / kodi addon
  • youtube => export subscriptions as RSS feed (google takeout) https://www.youtube.com/feeds/videos.xml?channel_id=
  • npo.nl => browser
  • ziggo.tv => browser
  • video => kodi

How to join Twitch IRC w/ WeeChat

Taken from

WeeChat terminal IRC client

  • https://weechat.org

gen token

  1. access the "OAuth Password Generator", a semi-official service
  • https://twitchapps.com/tmi/
  • http://help.twitch.tv/customer/portal/articles/1302780-twitch-irc
  2. push "Connect to Twitch"
  3. copy the oauth key
  • include the "oauth:" prefix
oauth:***

https://twitchapps.com/tmi/#access_token=***&scope=chat_login

reset/revoke

you must keep the "Twitch Chat OAuth Token Generator" connection

  • http://www.twitch.tv/settings/connections

if you press "Disconnect", the IRC connection becomes unavailable; you will need to generate a new OAuth key to join IRC again

add server

replace TWITCH_NAME with your lowercase Twitch name

/server add twitch irc.twitch.tv/6667 -password=oauth:*** -nicks=TWITCH_NAME -username=TWITCH_NAME

https://www.reddit.com/r/Twitch/comments/2uqews/anybody_here_using_weechat/

connect and join

/connect twitch
/join #CHANNEL_NAME

save settings

write settings to files

/save

exit/close

exit channel

/part #CHANNEL_NAME

close WeeChat

/quit

buffer

the commands/keys below are very convenient when joining 2 or more channels

/buffer list

move buffer-ring

Ctrl + n , Ctrl + p

close buffer

use Tab completion for BUFFER_NAME

/buffer close BUFFER_NAME

window split

vertical and horizontal split

/window splitv
/window splith

move window

F7 , F8

undo split

/window merge

set membership (optional)

to use it like a normal IRC client (get the user list et al.):

/set irc.server.twitch.command "/quote CAP REQ :twitch.tv/membership"

http://fogelholk.io/twitch-irc-joinsparts-with-weechat/ https://ter0.net/enable-userlist-in-weechat-for-twitch-tv-irc/

Linux research

namespaces

A namespace (NS) "wraps" some global system resource to provide isolation. Linux now supports multiple NS types, see namespaces(7):

namespace   description                                                 flag
Mount NS    isolate mount point list                                    CLONE_NEWNS
UTS NS      isolate system identifiers (hostname / NIS domain name)    CLONE_NEWUTS
IPC NS      isolate System V IPC & POSIX MQ objects                    CLONE_NEWIPC
PID NS      isolate PID number space                                    CLONE_NEWPID
Network NS  isolate network resources (network devices, stack, ports)  CLONE_NEWNET
User NS     isolate user ID and group ID number spaces                  CLONE_NEWUSER
Cgroup NS   virtualize (isolate) certain cgroup pathnames               CLONE_NEWCGROUP
Time NS     isolate boot and monotonic clocks                           CLONE_NEWTIME

For each NS:

  • Multiple instances of a NS may exist on the system
  • At system boot, there is only one instance of each NS type (the initial namespace)
  • A process resides in one NS instance (of each NS)
  • A process inside a NS instance sees only that instance of the resource

Example: the UTS namespace isolates two identifiers returned by uname(2):

  • nodename, (hostname) sethostname(2)
  • domainname, NIS domain name setdomainname(2)

Each UTS NS instance has its own nodename and domainname

Each process has symlink files in /proc/PID/ns, one per namespace, for example /proc/PID/ns/time; the content can be read with readlink and has the form ns-type:[inode-#].
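
For example (the inode number will differ per system):

$ readlink /proc/$$/ns/uts
uts:[4026531838]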

Namespaces API

Syscalls for NS:

  • clone(2) - create new (child) process in a new NS(s)
  • unshare(2) - create new NS(s) and move caller into it/them
  • setns(2) - move calling process to another (existing) NS instance

There are shell commands as well (from util-linux):

  • unshare(1) - create new NS and execute command in the NS(s)
  • nsenter(1) - enter existing NS and execute a command

Creating a new user namespace requires no privileges, but all other namespaces require CAP_SYS_ADMIN privileges. Example:

$ sudo unshare -u bash
# hostname foobar
# hostname
foobar

User namespaces

User namespaces allow per-namespace mappings of UIDs and GIDs: a process's UIDs and GIDs inside the NS may be different from those outside the NS. A process might have UID 0 inside the NS and a nonzero UID outside. User NSs have a hierarchical relationship: the parent of a user NS is the user NS of the process that created it. The parental relationship determines some rules about how capabilities work. When a new user NS is created, the first process in the NS has all capabilities; that process has the power of the superuser only inside the user NS.

After creating a user NS, a UID & GID mapping is defined by writing to two files, /proc/PID/{uid_map,gid_map}. Records written to these maps have the form: ID-inside-ns ID-outside-ns length. ID-inside-ns and length define the range of IDs inside the user NS that are to be mapped; ID-outside-ns defines the start of the corresponding mapped range in the "outside" user NS.

Example:

$ id
uid=1000(jelle)
$ unshare -U -r bash
usrns$ cat /proc/$$/uid_map
0 1000 1
usrns$ cat /proc/$$/gid_map
0 1000 1

Source:

  • https://man7.org/conf/meetup/understanding-user-namespaces--Google-Munich-Kerrisk-2019-10-25.pdf
  • https://lwn.net/Articles/531114/

containers

https://www.redhat.com/sysadmin/podman-inside-container https://developers.redhat.com/blog/2019/01/15/podman-managing-containers-pods

capabilities

cgroups

Sources:

  • https://lwn.net/Articles/604609/
  • https://lwn.net/Articles/679786/

eBPF

BPF (Berkeley Packet Filter), developed in 1992, improved the performance of packet capture tools. In 2013 a major rewrite of BPF was proposed, which was included in the Linux kernel in 2014 and turned BPF into a general purpose execution engine that can be used for a variety of things. BPF allows the kernel to run mini programs on system and application events, such as disk I/O. BPF can be considered a virtual machine due to its virtual instruction set, executed by the Linux kernel BPF runtime, which includes an interpreter and a JIT compiler for turning BPF instructions into native instructions for execution. BPF instructions must pass a verifier that checks for safety, ensuring they do not crash the kernel. BPF has three main uses in Linux: networking, observability & security.

Tracing is event based recording, as used by tools such as strace and tcpdump.

Sampling takes a subset of measurements to paint a coarse picture of the target; this is also known as profiling or creating a profile. For example, sampling every 10 milliseconds has less overhead, but can miss events.

Observability is understanding a system through observation. Tools for this include tracing, sampling and tools based on fixed counters. It does not include benchmark tools, which modify the state of the system. BPF tools are observability tools.

BCC (BPF Compiler Collection) is the first higher-level tracing framework developed for BPF.

bpftrace is a newer front end that provides a special-purpose, high-level language for developing BPF tools. bpftrace is for one-liners, BCC for complex scripts.

Workload characterization defines what workload is being applied.

Dynamic instrumentation (kprobes & uprobes)

A BPF tracing source which can insert instrumentation points into live software, with zero overhead when not in use, as the software is unmodified. It is often used to instrument the start and end of kernel / application functions. The downside of dynamic tracing is that functions can be renamed (an interface stability issue).

Example:

probe                         description
kprobe:vfs_read               instrument beginning of kernel vfs_read()
kretprobe:vfs_read            instrument return of kernel vfs_read()
uprobe:/bin/bash:readline     instrument beginning of readline in /bin/bash
uretprobe:/bin/bash:readline  instrument return of readline in /bin/bash

Static instrumentation (tracepoints and USDT)

Static instrumentation is added by developers: tracepoints in the kernel, and user-level statically defined tracing (USDT) for userspace programs.

Example:

tracepoint                              description
tracepoint:syscalls:sys_enter_open      instrument open(2) syscall
usdt:/usr/bin/mysqld:mysqld:query_stat  query_stat probe

Listing all tracepoints matching sys_enter_open:

bpftrace -l 'tracepoint:syscalls:sys_enter_open*'

Or snoop on exec with execsnoop:

sudo /usr/share/bcc/tools/execsnoop

BPF Technology background

BPF was originally developed to offload packet filtering to kernel space for tcpdump. This provided performance and safety benefits. The classic BPF was very limited: it only supported 2 registers versus 10 in eBPF and 32 bit register width versus 64, while eBPF adds more storage options (512 bytes of stack space plus practically infinite "map" storage) and supports more event targets. BPF is useful for performance tools as it is built into Linux, efficient and safe. BPF is more flexible than kernel modules: BPF programs are checked by a verifier before running and support richer data structures via maps. It is also easier to learn as it doesn't require kernel build artifacts. BPF programs can be compiled once and run everywhere.

BPF programs can be written with llvm, BCC and bpftrace. BPF instructions can be viewed via bpftool, which can also manipulate BPF objects including programs and maps.

BPF API

A BPF program can not call arbitrary kernel functions or read arbitrary memory; to accomplish this, "helper" functions such as bpf_probe_read are provided. Memory access for BPF is restricted to its registers and the stack; bpf_probe_read can read arbitrary memory but does some safety checks up front, and it can also read userspace memory.

BPF Program Types

Program types specify the type of events that the BPF program attaches to, in the case of observability tools. The verifier uses the program type to restrict which kernel functions can be called and which data structures can be accessed.

BPF lacked concurrency primitives until Linux 5.1, but tracing programs can't use them yet, so per-CPU hashes/maps are used to keep track of event data without running into map overwrites or corruption.

The BPF Type Format (BTF) is a metadata format that encodes debug information describing BPF programs, maps, etc. BTF is becoming a general purpose format for describing kernel data formats. Tracing tools require kernel headers to be installed to read / understand C structs, otherwise they have to be defined in a BPF program.

BPF CO-RE (Compile Once, Run Everywhere)

Allows BPF programs to be compiled to BPF bytecode once and then packaged for other systems.

BPF sysfs interface

Linux 4.4 allows BPF programs and maps to be exposed over sysfs, and allows persistent BPF programs to continue running after the program that loaded them has exited. This is also called "pinning".

BPF limitations

  • Cannot call arbitrary kernel functions
  • No infinite loops allowed
  • Stack size limited to MAX_BPF_STACK (512 bytes)

Stack trace walking

Stack traces are used to understand the code paths that led to an event. BPF can record stack traces, using frame pointer based or ORC based stack walks.

Frame pointer based

The head of the linked list of stack frames can always be found in a register (RBP on x86_64), and the return address is stored at a known offset (+8) from the RBP. The debugger just walks the linked list starting from the RBP. GCC nowadays defaults to omitting the frame pointer and uses RBP as a general purpose register.

Debuginfo

Debug information is usually available via debug packages which contain debug files in the DWARF format. Debug files are big and BPF does not support them.

LBR (Last Branch Record)

An Intel processor feature to record branches in a hardware buffer, including function call branches. This has no overhead but is limited in depth, 4-32 branches depending on the processor, which may not be enough.

ORC (Oops Rewind Capability)

New debug format for stack frames, uses ELF sections (.orc_unwind, .orc_unwind_ip) and has been implemented in the Linux kernel.

Flamegraphs

Visualize stack traces, a stack backtrace or call trace. For example:

func_c
func_b
func_a

Where a calls b, which calls c. All the different call trees are recorded along with how often a code path is taken, for example:

func_e            
func_d            func_c
func_b   func_b   func_b
func_a   func_a   func_a
1        2        7


                       +---------+
		       # func_e  #
                       +---------+
  +------------------+ +---------+
  # func_c          #  # func_d  #
  +------------------+ +---------+
+--------------------------------+
# func_b                         #
+--------------------------------+
+--------------------------------+
# func_a                         #
+--------------------------------+

func_c uses 70% cpu time, func_e 10%.

Event sources

kprobes

kprobes provide dynamic kernel instrumentation and can instrument any kernel function. When kretprobes are also used, function duration is recorded as well. kprobes work by saving the target address and replacing it with a breakpoint instruction (int3 on x86_64); when instruction flow hits this breakpoint, the breakpoint handler calls the kprobe handler, after which the original instruction is executed. When kprobes are no longer needed, the breakpoint is replaced by the original instruction. If ftrace already instruments the function, ftrace simply calls the kprobe handler, and when no longer used the ftrace kprobe handler is removed. For kretprobes, a kprobe entry is added to the function; when it is called, the return address is saved and replaced with a "trampoline" function, kretprobe_trampoline. When the function returns, the CPU passes control to the trampoline function, which calls the kretprobe handler. When no longer needed, the kprobe is removed.

This modifies kernel instruction text live, which means some functions are not allowed to be instrumented due to possible recursion. This does not work on ARM64 as kernel text is read only.

BPF can use kprobes via:

  • BCC - attach_kprobe & attach_kretprobe
  • bpftrace - kprobe & kretprobe
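
For example, counting vfs_read() calls with a bpftrace one-liner (a minimal sketch):

sudo bpftrace -e 'kprobe:vfs_read { @calls = count(); }'
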
uprobes

User level dynamic instrumentation, the same as kprobes but file based: when a function in an executable is traced, all processes using that file, now and in the future, are traced.

BPF can use uprobes via:

  • BCC - attach_uprobe & attach_uretprobe
  • bpftrace - uprobe & uretprobe

tracepoints

Static kernel instrumentation, added by kernel developers as subsystem:eventname. Tracepoints work by adding a noop instruction (5 bytes on x86_64) at compile time, which can later be replaced with a jmp. A tracepoint handler trampoline is added to the end of the function which iterates over an array of registered tracepoint callbacks.

On enabling a tracepoint, the nop is replaced with a jmp to the tracepoint trampoline, an entry is added to the tracepoint callback array and RCU (read, copy, update) is synced. On removal the array entry is dropped, and if it was the last one, the jmp is replaced with the nop again.

  • BCC: TRACEPOINT_PROBE
  • bpftrace: tracepoint probe type

BPF raw tracepoints (BPF_RAW_TRACEPOINT) create a stable tracepoint without creating arguments, so consumers have to handle raw arguments. This is a lot faster and allows consumers access to all arguments. The downside is that the arguments might change.

USDT (User-level statically defined tracing)

Can be added to software via systemtap-sdt-dev or Facebook's folly, which define macros for instrumentation points.

PMC

Performance monitoring counters: programmable hardware counters on the processor. PMC modes:

  • counting - keep track of the rate of events (kernel reads).
  • overflow sampling - the PMC sends interrupts to the kernel for the events it is monitoring.

Performance analysis

  • latency - how long to accomplish a request or operation (in ms)
  • rate - an operation or request rate per second
  • throughput - typically data movement in bits or bytes / sec
  • utilization - how busy a resource is over time as a percentage
  • cost - the price / performance ratio

Workload characterization, understand the applied workload:

  • Who is causing the load? (PID, process, name, UID, IP Address)
  • Why is the load called? (code path, stack trace, flame graph)
  • What is the load? (IOPS, throughput, type)
  • How is the load changing over time? (per-interval summary)

Drill down analysis

Examining a metric, finding ways to decompose it into components, and so forth.

  1. Start examining the highest level
  2. Examine next level details
  3. Pick the most interesting breakdown or clue
  4. If the problem is unsolved, go back to step 2

USE method: Utilization, Saturation, Errors.

60 second analysis:

  • uptime - quick overview of load avg; the three numbers are exponentially damped moving sum averages with 1, 5 and 15 minute constants
  • dmesg | tail - shows OOM, TCP dropping request issues
  • vmstat - virtual memory stats.
    • r - processes running on CPU or waiting for a turn (does not include disk I/O); r > cpu count => saturation.
    • free - free memory in KBytes
    • si/so - swap in & out; non-zero means the system is out of memory.
    • us, sy, id, wa and st: CPU time on avg. across all CPUs: user, system time (kernel), idle, wait I/O and stolen time.
  • mpstat -P ALL 1 - per-CPU time broken down into stats. CPU0 => 100% user time => single threaded bottleneck.
  • pidstat 1 - cpu usage per process, rolling output.
  • iostat -xz 1 - storage device I/O metrics.
    • r/s, w/s - delivered reads, writes to the device
    • await - time spent waiting on I/O completion in ms
    • aqu_sz - average number of requests issued to the device; > 1 can indicate saturation.
    • %util - device utilization (busy %); > 60% usually means poor performance
  • free -m - available memory should not be zero
  • sar -n DEV 1 - network device metrics
  • sar -n TCP,ETCP 1 - TCP metrics & errors:
    • active/s - number of locally initiated TCP connections / sec
    • passive/s - number of remotely initiated TCP connections / sec
    • rtrans/s - number of retransmits / sec

BCC Tool checklist

  • execsnoop - shows new process execution by printing one line of output for every execve(2)
    • look for short lived processes often not seen by normal tools
  • opensnoop - prints one line of output for each open(2)
    • the ERR column shows files that failed to open
  • ext4slower - traces common operations from the ext4 fs (reads, writes, opens, syncs) and prints those that exceed the limit (10ms)
  • biolatency - traces disk I/O latency (time from device => completion) shown as a histogram
  • biosnoop - prints a line of output for each disk I/O with details including latency
  • cachestat - prints a one line summary every second showing stats from the FS cache
  • tcpconnect - prints one line of output for every active TCP connection (connect)
  • tcpaccept - prints one line of output for every passive TCP connection (accept)
  • tcpretrans - prints one line of output for every TCP retransmit packet
  • runqlat - times how long threads were waiting for their turn on CPU. Longer than expected waits for CPU access can be identified.
  • profile - CPU profiler, a tool to understand which code paths are consuming CPU resources. It takes samples of stack traces at timed intervals and prints a summary of unique stack traces + counts.

Debugger

https://github.com/dylandreimerink/edb

Sources:

  • https://lwn.net/Articles/740157/
  • https://docs.cilium.io/en/v1.8/bpf/
  • https://www.kernel.org/doc/html/latest/bpf/index.html
  • BPF Performance Tools

io_uring

https://lwn.net/Articles/776703/ https://lwn.net/Articles/847951/ https://lwn.net/Articles/803070/ https://lwn.net/Articles/815491/ https://lwn.net/Articles/858023/ https://lwn.net/Articles/810414/

Rebuilds in CI

Goal: Test rebuilds for example for a new Python

Potential workflow:

pkgctl repo clone python
git checkout -b python-3.12
pkgctl build --repo $temp?

Every package in a todolist will be rebuilt against this repo, optionally checking out a python-3.12 branch if it exists. Needs either pkgctl support or special casing in CI.

AURWeb

Staging

  • test data
  • deploying staging env

MySQL

Setup

Document LXD setup

Javascript

  • Replace typeahead

Python port

Templates

Translations

{% trans %}String{% endtrans %}

{{ "%sFoo Bar%s"
   | tr
   | format("arg1", "arg2")
   | safe
}}

Testing

Performance

  • RPC API

Benchmarking

  • oha
  • log mysql queries

Prometheus monitoring

Metrics:

  • avg response time
  • 95th percentile

Postgresql

Upgrades

Changes for upgrade to the Python port.

Traceback (most recent call last):
  File "/usr/lib/python3.9/configparser.py", line 789, in get
    value = d[option]
  File "/usr/lib/python3.9/collections/__init__.py", line 941, in __getitem__
    return self.__missing__(key)            # support subclasses that define __missing__
  File "/usr/lib/python3.9/collections/__init__.py", line 933, in __missing__
    raise KeyError(key)
KeyError: 'aurwebdir'
[jelle@aurweb.lxd][/srv/http/aurweb]%AUR_CONFIG=/etc/aurweb/config python -m aurweb.spawn
-------------------------------------------------------------------------------------------------------------------------------
Spawing PHP and FastAPI, then nginx as a reverse proxy.
Check out https://aur.archlinux.org
Hit ^C to terminate everything.
-------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/srv/http/aurweb/aurweb/spawn.py", line 171, in <module>
    start()
  File "/srv/http/aurweb/aurweb/spawn.py", line 109, in start
    php_address = aurweb.config.get("php", "bind_address")
  File "/srv/http/aurweb/aurweb/config.py", line 40, in get
    return _get_parser().get(section, option)
  File "/usr/lib/python3.9/configparser.py", line 781, in get
    d = self._unify_values(section, vars)
  File "/usr/lib/python3.9/configparser.py", line 1149, in _unify_values
    raise NoSectionError(section) from None
configparser.NoSectionError: No section: 'php'

pytest-pacman

export PYTHONPATH=/home/jelle/projects/pytest-pacman:build/lib.linux-x86_64-3.9:.
PYTEST_PLUGINS=pytest_pacman.plugin pytest --fixtures

table view

Drop jQuery tablesorter

https://www.kryogenix.org/code/browser/sorttable/sorttable.js

archweb

  • archweb repository security status for packages in dev dashboards
  • mirror signup form? Gitlab
  • dark theme / css
  • json output for dashboards for a Rust arch-package-status command!!

Dark mode

https://sparanoid.com/note/css-variables-guide/ https://lea.verou.me/2021/03/inverted-lightness-variables/ https://codesalad.dev/blog/color-manipulation-with-css-variables-and-hsl-16

Big improvements

  • Mirror monitoring reminder emails
  • Keycloak SSO
  • Upstream SASS files
  • Rest API

Small things

  • todolist - add note support from staff (UX?)
  • todolist - add /todo/json endpoint and filter on status
  • detect untrusted / signed packages in archweb for example with zorun (old repo db)
  • performance stale relations
  • django performance
  • rebuilderd-status tests -> mock requests

use arch-common-style with SASS

  • django-sass
  • django-compressor?

Hyperkitty uses SASS https://gitlab.com/mailman/hyperkitty/-/blob/master/hyperkitty.spec

https://ronald.ink/using-sass-django/ https://terencelucasyap.com/using-sass-django/ https://github.com/jrief/django-sass-processor https://github.com/django-compressor/django-compressor/ https://github.com/torchbox/django-libsass https://www.accordbox.com/blog/how-use-scss-sass-your-django-project-python-way/

Mirror out of date

https://github.com/archlinux/archweb/issues/142

Create a new page with a list of out of date mirrors with a button for mirror maintainers to send an email. With a different template per issue:

Keycloak

TODO

  • Test groups
  • Test updating/changing groups and relogging in
  • Syncing groups/users periodically
  • Used the sso_accountid anywhere? Read OIDC docs about it / what happens when email changes in keycloak
  • Test JavaScript XHR actions with OIDC
  • do we implement filter_users_by_claims https://mozilla-django-oidc.readthedocs.io/en/stable/installation.html#connecting-oidc-user-identities-to-django-users
  • Hide password change logic from developer profile
  • Test Deny access for non Staff
  • Fix logout, not logging out of keycloak if that is desirable
  • Test new TU user login
  • The "Release Engineering" group is obsolete in archweb
  • Import sub ids for existing staff into archweb
  • Add Release Maintainers to Keycloak and add the logic for it
  • Onboard active testers to Keycloak, remove old testers
  • Move ex-developers/trusted users/staff to the retired group

Sync users from Keycloak

Most likely we want to create a new openid client which has "realm-management roles" such as query-groups, query-users, view-users, and can periodically authenticate and sync users (keycloak-sync). https://www.keycloak.org/docs/latest/server_admin/#_service_accounts https://github.com/marcospereirampj/python-keycloak

Blocking bugs

  • It's broken with latest requests: https://github.com/marcospereirampj/python-keycloak/issues/196
  • Document service admin example: https://github.com/marcospereirampj/python-keycloak/issues/141
  • Keycloak Rest API https://www.keycloak.org/docs-api/6.0/rest-api/index.html#_groups_resource

Self signed certificate issues with virtualenv

Fucking certifi not using the system CA bundle

# Your TLS certificates directory (Debian like)
export SSL_CERT_DIR=/etc/ssl/certs
# CA bundle PATH (Debian like again)
export CA_BUNDLE_PATH="${SSL_CERT_DIR}/ca-certificates.crt"
# If you have a virtualenv:
. ./.venv/bin/activate
# Get the current certifi CA bundle
CERTFI_PATH=`python -c 'import certifi; print(certifi.where())'`

test -L $CERTFI_PATH || rm $CERTFI_PATH
test -L $CERTFI_PATH || ln -s $CA_BUNDLE_PATH $CERTFI_PATH

Invalid redirect uri generated by archweb.. not https but http...

requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://keycloak.lxd/auth/realms/archlinux/protocol/openid-connect/token

Configuration issue when using ./manage.py; resolved by setting SECURE_PROXY_SSL_HEADER (e.g. ('HTTP_X_FORWARDED_PROTO', 'https')).

Devel queries

  • /devel - for flagged packages, in_testing() is run for every package in "My Flagged Packages" to check if the package is in testing
  • /packages/stale_relations - PackageRelation.last_update is called for every package, doing one query each - 140 queries in 2500 ms.

inactive_users

1200 ms -> removing

  • Fix relation.get_associated_packages for all inactive user relations, they trigger a query like: return Package.objects.normal().filter(pkgbase=self.pkgbase)

webseeds

We should be able to support webseeds again in magnets

magnet uri scheme webseeds
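
A magnet link carries web seeds in ws= parameters; a hypothetical example:

magnet:?xt=urn:btih:<infohash>&dn=<name>&ws=https://mirror.example.org/path/to/file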

wrong permissions

wrong_permissions is called 34 times

  • Fix relation.get_associated_packages for all stale_relations, they trigger a query like: return Package.objects.normal().filter(pkgbase=self.pkgbase)
<td class="wrap">{{ relation.user.userprofile.allowed_repos.all|join:", " }}</td>
<td class="wrap">{{ relation.repositories|join:", " }}</td>

Calls for pagination.. for everything

  • Inactive User Relations
  • Non-existant pkgbases
  • Maintainers with Wrong Permissions

98 similar queries: SELECT ••• FROM "packages" INNER JOIN "repos" ON ("packages"."repo_id" = "repos"."id") INNER JOIN "arches" ON ("packages"."arch_id" = "arches"."id") WHERE "packages"."pkgbase" = 'libg15render' ORDER BY "packages"."pkgname" ASC

arch common styles

Make the navbar menu resizable

Rest API

  • Token auth for permission related requests
  • Pagination
  • Signoffs
  • Search with multiple inputs (packages)
  • Todo
  • Packages
  • Reports

  • django-rest-framework
  • graphene-django
  • django-graph-api
  • django-restsql

Python packaging

Package: python-bootstrap-evil

Major release

Arch's Python modules store the version number in the module path, meaning they won't be picked up by a new Python release, for example 3.11 => 3.12.

  • bump Python and rebuild it in a separate branch
  • bootstrap
  • find incompatible packages upfront?

Rebuild order

The python package repo has a script called genrebuild; this should include all packages required for the rebuild.

Figuring out the order:

./genrebuild > rebuild-list.txt
cat rebuild-list.txt | xargs expac -Sv %n | sort | uniq > final.txt

For some reason our files.db includes old packages which are no longer in the repos; arch-rebuild-order hard-fails on missing packages, so we clean those out with an ugly expac hack.

We can use arch-rebuild-order; it does not handle cyclic dependencies but should be good enough (tm):

arch-rebuild-order --no-reverse-depends $(cat ./final.txt)

Python bootstrapping

Custom repository:

https://pkgbuild.com/~jelle/python3.11

cp /usr/share/devtools/pacman-staging.conf /usr/share/devtools/pacman-python.conf

Edit the config file and add above [staging]

[python]
SigLevel = Optional
Server = https://pkgbuild.com/~jelle/python3.11

sudo ln -s /usr/bin/archbuild /usr/bin/python-x86_64-build
repo-add python.db.tar.gz *.pkg.tar.zst
sudo python-x86_64-build -- -- --nocheck

Bootstrapping

  1. First build python-bootstrap (from svn-packages) with Python 3.X
  2. Yeet the packages into a pacman repository
  3. Build flit-core with bootstrapped build and installer
  4. Build python-installer comment out the sphinx build and repo-add it
  5. Build python-packaging (requires build,installer,flit-core). HACK: PYTHONPATH=src python -m build -nw required by python-build!
  6. Build python-build comment out the sphinx build and repo-add it
  7. Build python-pyproject-hooks and repo-add it
  8. build python-jaraco.text (requirement for bootstrap build of setuptools)
  9. build python-setuptools => bootstrap python-jaraco.text and tons more...
  10. Or build python-setuptools with export PYTHONPATH=/usr/lib/python3.10/site-packages/
  11. Wheel needs jaraco.functools and shit..

Upower

https://gitlab.freedesktop.org/upower/upower/-/merge_requests/49 https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/1135 https://gitlab.gnome.org/Teams/Design/settings-mockups/-/blob/master/power/power.png https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/1461 https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/1461

Battery charge limits / profiles

Hacking

meson setup --prefix /tmp --libexecdir lib --sbindir bin --reconfigure build
meson compile -C build

  • Figure out how to read udev env variables in upower
  • Check if the steamdeck supports battery charge limits
  • ChargeLimitEnabled property which also can be set by the client, and monitored for changes
  • Save the state if battery limiting is enabled in /var/lib/upower/battery_saving, as the embedded controller might not save the start/stop threshold and resetting the bios/battery at 0% might reset it.
  • Figure out how to expose the dbus option, so a property
  • LG/Asus/Toshiba only have end charge limits
  • Ask the Valve guys about interest in battery charge limiting (likely not, due to gamescope / KDE); can it be done in firmware?
  • Hack control-center to read UPower properties and settings
  • Investigate the Surface BIOS: it supports setting Enable Battery Limit Mode, which limits charging to 80%.
  • When upower sets the charge limits it should read them back, as not all hardware supports arbitrary percentages. FIXME? Required?
  • Mail Arvid Norlander lkml@vorpal.se to ask whether Toshiba laptops have a start limit
  • leaf requires GNOME to know what is up (battery status). Depends on Alan / GNOME Design decision
  • borrow a Framework and write EC charge_control support
  • wCPO Asus https://www.asus.com/us/support/FAQ/1032726/ suggests 58/60%
  • LG only has 80 and 100 as end charge limit
  • Toshiba only has 80 and 100 as end charge limit
  • Extend the kernel API to show valid charge options?
  • Add an allowed values? sysfs entry for charge_control_end_limits [80 100] => send an RFC to the mailing list
  • Add documentation for the LG/ASUS/Toshiba stop thresholds special cases
  • Ask Dell about exposing charge_control_*_thresholds
  • Framework driver https://github.com/DHowett/framework-laptop-kmod

Gnome Settings

busctl set-property org.freedesktop.UPower /org/freedesktop/UPower/devices/battery_BAT0 org.freedesktop.UPower.Device  ChargeThresholdEnabled b true

UPower Git issues

  • On startup charge-threshold-supported is FALSE; after toggling a dbus setting we re-read it and it's true. This should be read on startup!
  • Implement the switch functionality, setting the DBus property
  • Removes lid handling 07565ef6a1aa4a115f8ce51e259e408edbaed4cc as systemd does it? What should gnome do? https://gitlab.freedesktop.org/upower/upower/-/merge_requests/5#note_540149
$ busctl get-property org.freedesktop.login1 /org/freedesktop/login1 org.freedesktop.login1.Manager LidClosed
b false
15:50:57 hansg | Hmm, mutter also depends on upower for LID change monitoring, but only via DBUS. So I would say revert the change that dropped LID support from upower for now, until
               | there is an alternative. (The alternative is probably adding LID-switch support to libinput, having mutter use that, and having gnome-control-center ask mutter
               | for it...)
  • Unrelated, but we could easily implement a Linger property-changed signal. Just emit it here:
static int method_set_user_linger(sd_bus_message *message, void *userdata, sd_bus_error *error) {
        _cleanup_(sd_bus_creds_unrefp) sd_bus_creds *creds = NULL;
  • Handle get_devices changes => REVERT
  • How do we get the current charge levels into the translated text label?

GTK tutorial/intro

Follow up

  • framework charge_control settings
  • multiple USB keyboard's with backlight laptop + USB
  • Dell privacy screen switch state to libinput to mutter (ask Hans for hardware)
  • USB re-pluggable usb keyboard backlight support in upower (ask Hans for hardware)
  • Steam Deck enhancements? https://gitlab.freedesktop.org/upower/upower/-/issues/245

meson warnings

Build targets in project: 34
NOTICE: Future-deprecated features used:
 * 0.60.0: {'install_subdir with empty directory'}
 * 1.1.0: {'"boolean option" keyword argument "value" of type str'}

Battery charge limit

New idea: use hwdb for profiles

Match if /sys/class/power_supply/*

/etc/udev/rules.d/60-battery.rules

ACTION=="remove", GOTO="battery_end"

# battery:<model_name>:dmi:<dmi pattern>

SUBSYSTEM=="power_supply", KERNEL=="BAT*", \
  IMPORT{builtin}="hwdb 'battery:$attr{model_name}:$attr{[dmi/id]modalias}'", \
  GOTO="battery_end"

LABEL="battery_end"

/etc/udev/hwdb.d/60-battery.hwdb

battery:*:dmi:*
 CHARGE_LIMIT=docked;20;60

battery:*:dmi:*T14s*
 CHARGE_LIMIT=docked;50;80

Like /usr/lib/udev/hwdb.d/60-sensor.hwdb

Testing is done with udevadm

udevadm test /sys/class/power_supply/BAT0

Hard loading

sudo systemd-hwdb update --strict || echo 'could not parse successfully'
sudo udevadm trigger -v -p /sys/class/power_supply/BAT0

udevadm info -q all /sys/class/power_supply/BAT0
  • multi-battery laptops: we should also allow matching on "BAT*"
  • systemd PR for hwdb.d/parse_hwdb.py ?!
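
A small pyudev sketch to read the imported property back, assuming the hwdb entry above was applied:

import pyudev

context = pyudev.Context()
device = pyudev.Devices.from_path(context, "/sys/class/power_supply/BAT0")
# CHARGE_LIMIT is the property the hwdb match above imports
print(device.properties.get("CHARGE_LIMIT"))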

To add local entries, create a new file /etc/udev/hwdb.d/61-battery-local.hwdb

systemd-hwdb update
udevadm trigger -v -p DEVNAME=/dev/iio:deviceXXX
The profiles themselves could then be defined as:

[docked]
start=50
end=80

[travel]
start=100
end=100

[conservative]
start=20
end=60

/etc/upower/battery.d/20-lenovo

[docked]
start=30
end=90
dmi=T14sGen1 & T14S

Multiple batteries

[lenovo-bat0]
start=30
end=90
dmi=Lenovo
battery=BAT0

[lenovo-bat1]
start=50
end=90
dmi=Lenovo
battery=BAT1

Where dmi is a glob match on /sys/class/dmi/id/modalias, e.g. *T14sGen1*

battery is an entry in /sys/class/power_supply
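
A hedged sketch of how such a profile file could be matched against the machine; the INI keys come from the examples above, but the parsing itself is an assumption, not upower code:

import configparser
from fnmatch import fnmatch
from pathlib import Path

modalias = Path("/sys/class/dmi/id/modalias").read_text().strip()

config = configparser.ConfigParser()
config.read("/etc/upower/battery.d/20-lenovo")
for name in config.sections():
    profile = config[name]
    # dmi is a glob match on the modalias, e.g. *T14sGen1*
    if fnmatch(modalias, f"*{profile.get('dmi', '')}*"):
        print(name, profile.get("start"), profile.get("end"),
              profile.get("battery", "BAT0"))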

DBus API

a{s} of battery limit profiles

  • Enable()
  • Supported Property?
  • Start Property
  • End Property

Caveats

  • Not all laptops/hardware support the same settings, so setting 80 might result in 90.

Enum

To avoid showing too many profiles at once we should maybe just support an Enum of modes; every profile has a mode entry, and in theory we could "extend" this in the future.

  • low power
  • docked
  • travel
  • conservative?

Steamdeck

https://0x0.st/HGAi.sh

Supports it through a non-standard knob max_battery_charge_level; see the kernel source and driver code.

  • how do I write a proper driver which uses charge_control_end_threshold
  • how do other drivers do this?
  • how does this get to power_supply?
  • how is the power_supply class created?

Framework

  • Forum post about the EC with charge limit mention
  • Blog post about EC and charge limit
  • Official Framework EC code
  • Charge limit code?
  • Charge controller chip datasheet

Setting min charge threshold

  • Contact dhowett about mainlining the kernel driver

No min. percentage available. Does it require just setting SYSTEM_BBRAM_IDX_CHG_MIN to support it? Does the hardware understand that? It seems _MIN is totally not supported?

The embedded controller is a MEC1701, but what is the charge control chip?

Kernel Driver

Some patches have been submitted and merged to support the Framework cros_ec LPC driver. However, Framework extended the Chromebook EC code with charge control, which is not merged into the kernel.

So our kernel driver should use the EC controller and obtain a reference to it somehow; an example of a driver that interacts with the ChromeOS EC is:

drivers/power/supply/cros_usbpd-charger.c

See for example a function which calls an EC command:

static int cros_usbpd_charger_ec_command(struct charger_data *charger,

For a charge limit driver we likely need to write a driver similar to msi-ec.c in drivers/platform/x86, something like drivers/platform/x86/framework-ec.c

Questions:

  • How do we bind the EC controller? And do we need to?
  • It's a module_platform_driver; how does that determine when it needs to be loaded, or is this some DeviceTree thing? The cros_usbpd driver does it like this, but how does that work?
static int cros_usbpd_charger_probe(struct platform_device *pd)
{
	struct cros_ec_dev *ec_dev = dev_get_drvdata(pd->dev.parent);
	struct cros_ec_device *ec_device = ec_dev->ec_dev;

wmi_ec uses ec_read which in turn calls acpi_ec_read. The driver binds based on DMI strings.

  • What should the framework-ec driver use?

Mainline OpenRazer power bits so upower works out of the box

A user made a bug report asking to support openrazer's sysfs attributes for charging/battery readout. This driver sadly exports custom sysfs attributes, while it should implement a power_supply like the Logitech hid-logitech-hidpp.c driver does, which upower can automatically pick up.

Charging bits are read here

  • Obtain hardware with a USB dongle; bluetooth might work out of the box?
  • How does one implement a power_supply driver?
  • Contact upstream about mainlining; note that the driver exposes far more stuff over sysfs, which likely can't be mainlined, nor do I have interest in it.

Battery Calibration

After charge limits we should consider working on battery calibration. To inhibit the system, we can use a systemd dbus call just like cc-color-calibrate does in gnome-control-center.

UPower instead should probably talk to systemd to inhibit; it works as follows:

UPower adds:

  • BAT0->Calibrate()
  • BAT0->IsCalibrating = bool

  1. User calls BAT0->Calibrate()
  2. We set IsCalibrating = true
  3. We inhibit the current session https://www.freedesktop.org/wiki/Software/systemd/inhibit/
  4. We disallow changing the charge limits
  5. We disable charge limits
  6. We write 'full-discharge' to /sys/class/power_supply/BAT0/charge_behaviour
  7. We keep track of where we are, so discharge X% => 0% and then 0% => 100% is a full calibration
  8. Once completed we set IsCalibrating to false
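
A minimal PyGObject sketch of taking the logind inhibitor lock (the what/who/why/mode strings are illustrative assumptions):

from gi.repository import Gio, GLib

bus = Gio.bus_get_sync(Gio.BusType.SYSTEM, None)
# Inhibit() returns a file descriptor; the lock is held until every
# copy of that fd is closed
ret, fds = bus.call_with_unix_fd_list_sync(
    "org.freedesktop.login1",
    "/org/freedesktop/login1",
    "org.freedesktop.login1.Manager",
    "Inhibit",
    GLib.Variant("(ssss)", ("sleep:shutdown", "upower", "battery calibration", "block")),
    GLib.VariantType("(h)"),
    Gio.DBusCallFlags.NONE,
    -1,
    None,
    None,
)
fd = fds.get(ret.unpack()[0])  # keep open while calibrating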

BTRFS Support

Cockpit btrfs support: initial read-only support has landed, with create/delete subvolume support on the way. Some missing features in general are:

  • UDisks improvements
  • Further tests
  • Resize/Grow support
  • Multi device support
    • Adding a new device to a filesystem
    • Remove a device from a filesystem
  • Volume creation support
    • RAID1/RAID5 etc.
  • Snapshots support
  • Quota support
  • Robustification of libblockdev

UDisks Improvements

  • use udisks for listing subvolumes (GetSubvolumes); currently does not work well for us due to MountPoints (notes)
  • use udisks CreateSubvolume, issue (does not work as it always selects the first mountpoint)
  • use udisks CreateSnapshot, issue (does not work as it always selects the first mountpoint)
  • use udisks DeleteSubvolume issue (does not work as it always selects the first mountpoint and no recursive removal support)
  • reproduce the issue below and create a good bug report

Bugs

Repeatedly adding/removing a device to a volume either loses a udev event, or udisks does not pick up a udev event.

btrfs device add /dev/mapper/vgroup0-lvol1 /mnt/data; date
btrfs device remove /dev/mapper/vgroup0-lvol1 /mnt/data; date

This generates a udev event; it's udisks which no longer knows!
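
A small pyudev sketch (a debugging aid, not Cockpit code) to confirm the kernel emits the events while running the commands above:

import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by("block")
# poll() blocks until the next event arrives
for device in iter(monitor.poll, None):
    print(device.action, device.device_node)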

Further tests

Test setting a different default subvolume in btrfs and see how Cockpit handles this.

Resize / Grow support

Should be exposed by UDisks, see dbus docs.

Multi device support

CreateVolume exists in UDisks and should be implemented like LVM in Cockpit. It works a bit differently than LVM, in that metadata and data can have different raid profiles.

Finding out what multi device profile was selected can only be done via:

btrfs --format json filesystem df $mountpoint

For options see mkfs.btrfs --help
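
A sketch of consuming the JSON output of the command above from python; the top-level key is an assumption based on the command name, and /mnt/data stands in for an actual mount point:

import json
import subprocess

out = subprocess.run(
    ["btrfs", "--format", "json", "filesystem", "df", "/mnt/data"],
    capture_output=True, text=True, check=True,
)
# each entry describes a block-group type (data/metadata/system) and
# its raid profile
for entry in json.loads(out.stdout).get("filesystem-df", []):
    print(entry)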

A device can be added or removed with AddDevice and RemoveDevice but currently we can't detect when a drive is missing or obtain health stats from the "array".

UDisks does not know about missing devices; btrfs filesystem show does, but it is hard to parse:

echo 1 >/sys/block/sdx/device/delete

Label: 'fedora-test'  uuid: cece4dd8-6168-4c88-a4a8-f7c51ed4f82b
    Total devices 3 FS bytes used 2.08GiB
    devid    1 size 11.92GiB used 3.56GiB path /dev/vda5
    devid    2 size 0 used 0 path /dev/sda MISSING
    devid    3 size 512.00MiB used 0.00B path /dev/sdc

In UDisks, LVM shows this as VolumeGroup => MissingPhysicalVolumes.

  • Teach libblockdev to expose the data and metadata profile
    • Expand libblockdev with filesystem information using btrfs --format json filesystem df $mountpoint
  • Teach udisks to expose missing disks
    • Expand libblockdev BDBtrfsDeviceInfo with missing bool field
    • Teach libblockdev bd_btrfs_list_devices to detect missing devices
  • Read up if btrfs exposes any health stats about the array and if it was synced
  • Research if an array needs to be balanced when a device is added as this does not happen automatically
    • Implement btrfs balance in libblockdev
    • Expose balance start/end in UDisks, requires a job API

libbtrfsutil

C test program

gcc -O0 -ggdb -I /home/jelle/projects/btrfs-progs -L /home/jelle/projects/btrfs-progs test.c -lbtrfsutil
LD_LIBRARY_PATH=/home/jelle/projects/btrfs-progs ./a.out

Running the built Python module:

LD_LIBRARY_PATH=/home/jelle/projects/btrfs-progs  PYTHONPATH=libbtrfsutil/python valgrind --leak-check=full  python3 test.py
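
test.py itself isn't included in these notes; a minimal sketch of what it could exercise, assuming the python bindings and a filesystem mounted at /mnt/data:

import btrfsutil

# id of the default subvolume (5 is the filesystem root)
print(btrfsutil.get_default_subvolume("/mnt/data"))

# walk all subvolumes below the mount point
it = btrfsutil.SubvolumeIterator("/mnt/data", info=True)
for path, info in it:
    print(path, info.id, info.parent_uuid)
it.close()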

Snapshots support

We currently list all subvolumes, so also snapshots. Cockpit should display snapshots and regular subvolumes differently, and also check whether they are readonly or not.

  • List snapshots differently; how do we identify that something is a snapshot? Can we only differentiate between btrfs subvolume list -s and without?
  • Snapshot creation
    • Extend the create subvolume or create a new menu entry for Create snapshot
    • Add option to create a readonly snapshot
    • can't use UDisks for this as it suffers from the same issue as CreateSubvolume (getting the first mount point)
  • Snapshot deletion - should be the same as a normal subvolume removal, just needs tests

Quota support

  • Learn about quotas
  • Expose quotas via libblockdev
    • Create quota group support
    • Create subvolume allow setting quotas

Robustification of libblockdev

Use libbtrfsutil where possible instead of shelling out to btrfs.

  • Port create/delete subvolume to libbtrfsutil
    • Use btrfs_util_delete_subvolume
    • Extend delete with a new flag for BTRFS_UTIL_DELETE_SUBVOLUME_RECURSIVE
    • Use btrfs_util_create_subvolume
  • Port create_snapshot to libbtrfsutil
  • Port listing subvolumes to libbtrfsutil
  • libbtrfsutil extending
    • extend libbtrfsutil with per device or volume information like btrfs filesystem show and/or data/metadata
    • add addDevice/removeDevice support if allowed
    • add createVolume support if allowed
    • setLabel support

Old notes

Subvolumes

Just directories under a subvolume

default subvolume id [5] is the ultimate root of every btrfs filesystem

Usually mounted as:

UUID=a280b604-6023-4ba5-bb9e-80d612f84b0d /home btrfs subvol=home,compress=zstd:1 0 0

A proper subvolume always has inode number 256. If a subvolume is nested and a snapshot is then taken, the cloned directory entry representing the subvolume becomes empty and the inode has number 2.
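
A small sketch of both checks, assuming /home is the subvolume from the fstab line above:

import os

import btrfsutil

print(os.stat("/home").st_ino == 256)   # proper subvolumes have inode 256
print(btrfsutil.is_subvolume("/home"))  # libbtrfsutil's own check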

Udisks

Snapshots

  • How to create them? btrfs subvolume snapshot $subvolume $target

  • How to mount them?

  • How to identify them? Snapshots are basically subvolumes but with initial contents

  • Different types of snapshots? btrfs has read only and read/write snapshots

    Can be set on creation with -r, or with a property: btrfs property set /root/@snapshots/6oct-1 ro true

  • How do we identify a rw/readonly snapshot? btrfs property get /root/@snapshots/6oct-1 ro (see the sketch below)
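
A sketch of the creation and identification questions using the libbtrfsutil python bindings, with the paths from the examples above:

import btrfsutil

# create a read-only snapshot of a subvolume
btrfsutil.create_snapshot("/root", "/root/@snapshots/6oct-1", read_only=True)

# a snapshot is a subvolume whose parent_uuid points at its source
info = btrfsutil.subvolume_info("/root/@snapshots/6oct-1")
print(info.parent_uuid)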

multiple disks

  • Should cockpit balance for you? (udisks does not)
  • What modes should we offer? raid0/raid1/raid01?

PCP

Performance Co-Pilot provides historical system metrics. PCP stores metrics in archives, in /var/log/pcp/pmlogger/$(hostname).

All metrics are identified by a PMID (Performance Metric Identifier). Each metric is part of an instance domain (typedef unsigned long pmInDom;), except single value metrics, which always use PM_INDOM_NULL.

Example multi value metric (instances):

$ pminfo -f filesys.free

filesys.free
    inst [0 or "/dev/mapper/system"] value 472018336
    inst [1 or "/dev/nvme0n1p1"] value 371764

Single value metric:

$ pminfo -f mem.freemem

mem.freemem
    value 3015252

Obtaining metrics from an archive is done by creating a "handle" with pmNewContext. The collection time can be set to an arbitrary time with pmSetMode. The instances to be fetched can be restricted with pmAddProfile and pmDelProfile.
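
A hedged sketch of the pmSetMode part; context creation is as in the fuller example further below:

import cpmapi as c_api
from pcp import pmapi

context = pmapi.pmContext(c_api.PM_CONTEXT_ARCHIVE, "/path/to/archive")
# read forward from the very start of the archive (epoch timestamp 0)
context.pmSetMode(c_api.PM_MODE_FORW, pmapi.timeval(0, 0), 0)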

Performance metric description

The metadata of a metric is described in the pmDesc struct, which defines its format and semantics.

/* Performance Metric Descriptor */
typedef struct {
    pmID    pmid;   /* unique identifier */
    int     type;   /* base data type (see below) */
    pmInDom indom;  /* instance domain */
    int     sem;    /* semantics of value (see below) */
    pmUnits units;  /* dimension and units (see below) */
} pmDesc;

The types

/* pmDesc.type - data type of metric values */
#define PM_TYPE_NOSUPPORT -1   /* not in this version */
#define PM_TYPE_32        0    /* 32-bit signed integer */
#define PM_TYPE_U32       1    /* 32-bit unsigned integer */
#define PM_TYPE_64        2    /* 64-bit signed integer */
#define PM_TYPE_U64       3    /* 64-bit unsigned integer */
#define PM_TYPE_FLOAT     4    /* 32-bit floating point */
#define PM_TYPE_DOUBLE    5    /* 64-bit floating point */
#define PM_TYPE_STRING    6    /* array of char */
#define PM_TYPE_AGGREGATE 7    /* arbitrary binary data */
#define PM_TYPE_AGGREGATE_STATIC 8 /* static pointer to aggregate */
#define PM_TYPE_EVENT     9    /* packed pmEventArray */
#define PM_TYPE_UNKNOWN   255  /* used in pmValueBlock not pmDesc */

Cockpit-pcp does not support PM_TYPE_AGGREGATE, PM_TYPE_EVENT

Semantics describe how Cockpit should represent the data:

/* pmDesc.sem - semantics of metric values */
#define PM_SEM_COUNTER  1  /* cumulative count, monotonic increasing */
#define PM_SEM_INSTANT  3  /* instantaneous value continuous domain */
#define PM_SEM_DISCRETE 4  /* instantaneous value discrete domain */

The C code doesn't do anything with this information except return it to the client in the meta message. However, the derive == rate option requires the bridge to calculate the sample rate based on the last value and the provided interval.
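
A worked sketch of that derivation (the function name is illustrative):

def rate(last_value, value, interval_ms):
    """Per-second rate between two successive counter samples."""
    return (value - last_value) / (interval_ms / 1000.0)

# 1000 ms between samples, counter went from 100 to 250 => 150 units/s
assert rate(100, 250, 1000) == 150.0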

PCP Archive source

The metrics1 channel supports passing source=pcp-archive or source=/path/to/archive, the latter likely introduced for testing. Archive specific options from docs/protocol.md (combined in the sketch after the list):

  • "metrics" (array): Descriptions of the metrics to use. See below.

  • "instances" (array of strings, optional): When specified, only the listed instances are included in the reported samples.

  • "omit-instances" (array of strings, optional): When specified, the listed instances are omitted from the reported samples. Only one of "instances" and "omit-instances" can be specified.

  • "interval" (number, optional): The sample interval in milliseconds. Defaults to 1000.

  • "timestamp" (number, optional): The desired time of the first sample. This is only used when accessing archives of samples.

    This is either the number of milliseconds since the epoch, or (when negative) the number of milliseconds in the past.

    The first sample will be from a time not earlier than this timestamp, but it might be from a much later time.

  • "limit" (number, optional): The number of samples to return. This is only used when accessing an archive.

Reading data from archive

from pcp import pmapi
import cpmapi as c_api

# Obtain an archive, this can be multiple if a path is given to say /var/log/pcp/pmlogger/hostname
context = pmapi.pmContext(c_api.PM_CONTEXT_ARCHIVE, '/path/to/archive')

# Get the internal metric ids for the user provided metrics
pmids = context.pmLookupName('mock.value')

# Get the descriptions, this is used for scaling values if required
descs = context.pmLookupDescs(pmids)

results = context.pmFetch(pmids)
for i in range(results.contents.numpmid):
    atom = context.pmExtractValue(results.contents.get_valfmt(i),
                                  results.contents.get_vlist(i, 0),
                                  descs[0].contents.type,
                                  c_api.PM_TYPE_U32)
    print(f"#mock.value={atom.ul}")

Unit tests

  • Test limiting the data, so generate a 1000 record archive (limit option in the metrics1 channel)
  • Different types of data; currently only testing U32. Cockpit requests "kernel.all.cpu.nice" (with derive: "rate"), "mem.physmem", "swap.pagesout"
  • Test omit-instances { name: "network.interface.total.bytes", derive: "rate", "omit-instances": ["lo"] }
  • Test multi value metrics (which have "instances" like network.interface.total.bytes)
  • Test passing instances
  • Test sample interval changes

Questions

  • Why do we need to read archive per archive? The API supports reading them all for us.
    • Is it because of error handling?
    • Is it because of limiting?
    • Is it because of the start timestamp?

References

Programming PCP