02 May 2024

feedPlanet Python

Programiz: Getting Started with Python

In this tutorial, you will learn to write your first Python program.

02 May 2024 4:47am GMT

01 May 2024

feedPlanet Python

Tryton News: Tryton Release 7.2

We are proud to announce the 7.2 release of Tryton.
This release provides many bug fixes, performance improvements and some fine tuning. It also adds 5 new modules.
You can give it a try on the demo server, use the docker image or download it here.
As usual upgrading from previous series is fully supported but some manual steps are needed to update from 7.0 to 7.2.

Here is a list of the most noticeable changes:

Changes for the User

Clients

You can now request to reset your password from the login dialog. Doing this sends a temporary password to your email address.

The PYSON widgets display the value using operators which are more user-friendly.

Web Client

The binary and image widgets now support drag and drop to set their value.

Desktop Client

On list and tree views, there is now a contextual menu that allows you to copy the contents of a cell or a column.

Accounting

It is now possible to modify the dates of a period even if it contains posted moves as long as the existing moves stay inside the new period dates. This is useful to correct mistakes or even extend a period.

A warning is now raised when you validate an invoice for which some lines do not have the expected default taxes. This helps to detect mistakes.

When an invoice in another currency is paid, the currency exchange amount is now booked automatically into a configured account.

You can now enter the amount of the transaction in a second currency on statements. This makes it easier to do the reconciliation between the statement and invoices based on a second currency.

Company

Employees are now automatically deactivated once their end date has passed.

It is now possible to use some placeholders in the header and footer of company reports like the company name, phone, website etc.

Marketing

Some reports are now available on marketing scenarios and activities. They calculate and display the open, click and click-through rates.

UTM parameters can be added to marketing emails so you can follow their results.

Product

You can now store the Manufacturer Part Number and brand as a product identifier.

Tryton now supports adding images to product categories.

You can now use non-square images on products. The module resizes the images to fit the requested size but keeps the aspect ratio.

Production

The production number is now only set when the order progresses to waiting. This prevents the supply module from consuming numbers for production requests that are subsequently deleted.

Purchase

It is now possible to remove ignored invoices and stock moves from purchases. This is useful when you have ignored the invoice or shipping exception by mistake and need to correct it.

Sale

It is now possible to remove the ignored invoices and stock moves from sales. This is useful when you have ignored the invoice or shipping exception by mistake and need to correct it.

The product on sale opportunity lines can be omitted, a description and a note can be used instead.

Stock

The drop shipment (like the other shipments) can now be split. This is useful to match exactly how the supplier shipped the products.

Shipment numbers are now only set when the shipment progresses to a waiting state. This prevents consuming sequence numbers for requests that are going to be deleted.

The lot trace now optionally displays the source and destination locations. This can be useful when investigating the traceability of a lot.

Web Shop

It is now possible to limit a web shop by country.

The web shop supports price lists to calculate the sale price and the non sale price.

New Modules

Stock Product Location Place

The Stock Product Location Place Module allows defining the place where each product is stored within each location.

Account SYSCOHADA

The Account SYSCOHADA Module provides templates for the chart of accounts for OHADA countries.

Account Export

The Account Export Module provides the basis to allow accounting moves to be exported to external accounting software.

Account Export WinBooks

The Account Export WinBooks Module adds support to export accounting data to WinBooks.

Web Shop Product Data Feed

The Web Shop Product Data Feed Module exposes web shop products as a data feed for Google Merchant and Meta for business.

Changes for the System Administrator

Server

It is now possible to update the database without updating the indexes or to create the indexes concurrently. These are useful options when updating a busy system.

It is possible to define a timeout for some RPC calls. This helps prevent users from overloading the system with expensive requests.

Changes for the Developer

Server

We added send_message methods to simplify sending emails using Python's Message objects.
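For context, the Message objects in question are the ones from Python's standard email package. Here is a minimal, stdlib-only sketch of building and sending such a message; the new Tryton send_message helpers themselves are not shown, since their exact signature is not part of this announcement, and the addresses are placeholders.

from email.message import EmailMessage
import smtplib

# Build a standard library message; this is the kind of object the new
# send_message helpers are meant to make easier to send from Tryton.
msg = EmailMessage()
msg["From"] = "noreply@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Notification"
msg.set_content("Hello from Tryton.")

# Plain smtplib delivery, shown for illustration only.
with smtplib.SMTP("localhost") as server:
    server.send_message(msg)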

A new kind of field, fmany2one, is now available; it is a type of many2one field that stores a field other than the id as its reference. It is used mainly in the infrastructure to create foreign keys based on a model or field name.

The read-only relational fields are no longer copied by default. This was a source of various bugs, as developers often forgot to exclude these fields from the copy.

Clients

The clients read the xxx2many fields using dotted notation. This avoids making multiple requests when displaying a form with these fields.

The XML ID of a record is now displayed in the log window.

Script

It is possible to configure the scripting client to skip any warning.

Product

It is now possible to generate barcodes for a product using a different type than the one on the identifier.

Stock

The done buttons have been renamed to do.

Location name fields have been added to stock moves. This is useful to customize the information displayed in reports about the source and destination locations.


01 May 2024 4:00pm GMT

Anarcat: Tor migrates from Gitolite/GitWeb to GitLab

Note: I've been awfully silent here for the past ... (checks notes) oh dear, 3 months! But that's not because I've been idle, quite the contrary, I've been very busy but just didn't have time to write about anything. So I've taken it upon myself to write something about my work this week, and published this post on the Tor blog which I copy here for a broader audience. Let me know if you like this or not.

Tor has finally completed a long migration from legacy Git infrastructure (Gitolite and GitWeb) to our self-hosted GitLab server.

Git repository addresses have therefore changed. Many of you probably have made the switch already, but if not, you will need to change:

https://git.torproject.org/

to:

https://gitlab.torproject.org/

in your Git configuration.

The GitWeb front page is now an archived listing of all the repositories before the migration. Inactive git repositories were archived in the GitLab legacy/gitolite namespace, and the gitweb.torproject.org and git.torproject.org web sites now redirect to GitLab.

Best effort was made to reproduce the original gitolite repositories faithfully and also avoid duplicating too much data in the migration. But it's possible that some data present in Gitolite has not migrated to GitLab.

User repositories are particularly at risk, because they were massively migrated, and they were "re-forked" from their upstreams, to avoid wasting disk space. If a user had a project with a matching name it was assumed to have the right data, which might be inaccurate.

The two virtual machines responsible for the legacy service (cupani for git-rw.torproject.org and vineale for git.torproject.org and gitweb.torproject.org) have been shut down. Their disks will remain for 3 months (until the end of July 2024) and their backups for another year after that (until the end of July 2025), after which point all the data from those hosts will be destroyed, with only the GitLab archives remaining.

The rest of this article expands on how this was done and what kind of problems we faced during the migration.

Where is the code?

Normally, nothing should be lost. All repositories in gitolite have been either explicitly migrated by their owners, forcibly migrated by the sysadmin team (TPA), or explicitly destroyed at their owner's request.

An exhaustive rewrite map translates gitolite projects to GitLab projects. Some of those projects actually redirect to their parent in cases of empty repositories that were obvious forks. Destroyed repositories redirect to the GitLab front page.

Because the migration happened progressively, it's technically possible that commits pushed to gitolite were lost after the migration. We took great care to avoid that scenario. First, we adopted a proposal (TPA-RFC-36) in June 2023 to announce the transition. Then, in March 2024, we locked down all repositories from any further changes. Around that time, only a handful of repositories had changes made after the adoption date, and we examined each repository carefully to make sure nothing was lost.

Still, we built a diff of all the changes in the git references that archivists can peruse to check for data loss. It's large (6MiB+) because a lot of repositories were migrated before the mass migration and then kept evolving in GitLab. Many other repositories were rebuilt in GitLab from parent to rebuild a fork relationship which added extra references to those clones.

A note to amateur archivists out there, it's probably too late for one last crawl now. The Git repositories now all redirect to GitLab and are effectively unavailable in their original form.

That said, the GitWeb site was crawled into the Internet Archive in February 2024, so at least some copy of it is available in the Wayback Machine. At that point, however, many developers had already migrated their projects to GitLab, so the copies there were already possibly out of date compared with the repositories in GitLab.

Software Heritage also has a copy of all repositories hosted on Gitolite since June 2023 and has continuously kept mirroring the repositories, where they will hopefully be kept for eternity. There's an issue where the main website can't find the repositories when you search for gitweb.torproject.org; instead, search for git.torproject.org.

In any case, if you believe data is missing, please do let us know by opening an issue with TPA.

Why?

This is an old project in the making. The first discussion about migrating from gitolite to GitLab started in 2020 (almost 4 years ago). But going further back, the first GitLab experiment was in 2016, almost a decade ago.

The current GitLab server dates from 2019, replacing Trac for issue tracking in 2020. It was originally supposed to host only mirrors for merge requests and issue trackers but, naturally, one thing led to another and eventually, GitLab had grown a container registry, continuous integration (CI) runners, GitLab Pages, and, of course, hosted most Git repositories.

There were hesitations at moving to GitLab for code hosting. We had discussions about the increased attack surface and ways to mitigate that, but, ultimately, it seems the issues were not that serious and the community embraced GitLab.

TPA actually migrated its most critical repositories out of shared hosting entirely, into specific servers (e.g. the Puppet Git repository is just on the Puppet server now), leveraging Git's decentralized nature and removing an entire attack surface from our infrastructure. Some of those repositories are mirrored back into GitLab, but the authoritative copy is not on GitLab.

In any case, the proposal to migrate from Gitolite to GitLab was effectively just formalizing a fait accompli.

How to migrate from Gitolite / cgit to GitLab

The progressive migration was a challenge. If you intend to migrate between hosting platforms, we strongly recommend making a "flag day" during which you migrate all repositories at once. This ensures a smoother transition and avoids elaborate rewrite rules.

When Gitolite access was shut down, we had repositories on both GitLab and Gitolite, without a clear relationship between the two. A priori, the plan then was to import all the remaining Gitolite repositories into the legacy/gitolite namespace, but that seemed wasteful, particularly for large repositories like Tor Browser, which uses nearly a gigabyte of disk space. So we took special care to avoid duplicating repositories.

When the mass migration started, only 71 of the 538 Gitolite repositories were marked as Migrated to GitLab in the gitolite.conf file. So, given that we had hundreds of repositories to migrate, we developed some automation to "save time". We already automate similar ad-hoc tasks with Fabric, so we used that framework here as well. (Our normal configuration management tool is Puppet, which is a poor fit here.)

So a relatively large amount of Python code was produced to basically do the following (a simplified sketch follows this list):

  1. check if all on-disk repositories are listed in gitolite.conf (and vice versa) and either add missing repositories or delete them from disk if garbage
  2. for each repository in gitolite.conf, if its category is marked Migrated to GitLab, skip, otherwise;
  3. find a matching GitLab project by name, prompt the user for multiple matches
  4. if a match is found, redirect if the repository is non-empty
    • we have GitLab projects that look like the real thing, but are only present to host migrated Trac issues
    • in such cases we cloned the Gitolite project locally and pushed to the existing repository instead
  5. otherwise, a new repository is created in the legacy/gitolite namespace, using the "import" mechanism in GitLab to automatically import the repository from Gitolite, creating redirections and updating gitolite.conf to document the change
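A heavily simplified sketch of that loop, written against the python-gitlab library, might look like the following. The gitolite.conf parsing is left out, the token and paths are placeholders, and the real fabric-tasks code does far more (prompting, redirections, hook installation), so treat this purely as an outline of steps 2 to 5:

import gitlab

GITOLITE_BASE = "https://git.torproject.org"   # legacy clone URLs
LEGACY_NS = "legacy/gitolite"                  # archive namespace in GitLab

gl = gitlab.Gitlab("https://gitlab.torproject.org", private_token="REDACTED")

def migrate(repos):
    """repos: iterable of (path, category) pairs already parsed from gitolite.conf."""
    for path, category in repos:
        if category == "Migrated to GitLab":
            continue  # step 2: already migrated, skip
        name = path.rsplit("/", 1)[-1]
        # step 3: look for existing GitLab projects with a matching name
        matches = gl.projects.list(search=name, get_all=True)
        if matches:
            # step 4: a match was found, record a redirect (or push into it)
            print(f"{path} -> {matches[0].path_with_namespace}")
            continue
        # step 5: import into the legacy/gitolite namespace and archive it
        group = gl.groups.get(f"{LEGACY_NS}/{path.rsplit('/', 1)[0]}")
        project = gl.projects.create({
            "name": name,
            "namespace_id": group.id,
            "import_url": f"{GITOLITE_BASE}/{path}",
        })
        project.archive()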

User repositories (those under the user/ directory in Gitolite) were handled specially. First, the existing redirection map was checked to see if a similarly named project was migrated (so that, e.g. user/dgoulet/tor is properly treated as a fork of tpo/core/tor). Then the parent project was forked in GitLab and the Gitolite project force-pushed to the fork. This allows us to show the fork relationship in GitLab and, more importantly, benefit from the "pool" feature in GitLab which deduplicates disk usage between forks.
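The GitLab side of that fork-and-force-push dance can be sketched the same way; the user, namespace and local clone path below are illustrative placeholders, and the real code also waits for the fork to finish and drops branch protection first, as the transcript further down shows.

import subprocess
import gitlab

gl = gitlab.Gitlab("https://gitlab.torproject.org", private_token="REDACTED")

# Fork the already-migrated parent into the legacy user namespace, so the
# fork relationship (and GitLab's "pool" deduplication) is preserved.
parent = gl.projects.get("tpo/core/tor")
fork = parent.forks.create({"namespace_path": "legacy/gitolite/user/dgoulet"})

# Then force-push all refs from a local bare clone of the old Gitolite
# repository, so the fork contains exactly what the user had.
subprocess.run(
    ["git", "push", "--mirror",
     f"git@gitlab.torproject.org:{fork.path_with_namespace}.git"],
    cwd="/tmp/user-dgoulet-tor.git",  # placeholder local clone path
    check=True,
)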

Sometimes, we found no such relationships. Then we simply imported multiple repositories with similar names in the legacy/gitolite namespace, sometimes creating forks between user repositories, on a first-come-first-served basis from the gitolite.conf order.

The code used in this migration is now available publicly. We encourage other groups planning to migrate from Gitolite/GitWeb to GitLab to use (and contribute to) our fabric-tasks repository, even though it does have its fair share of hard-coded assertions.

The main entry point is the gitolite.mass-repos-migration task. A typical migration job looked like:

anarcat@angela:fabric-tasks$ fab -H cupani.torproject.org gitolite.mass-repos-migration 
[...]
INFO: skipping project project/help/infra in category Migrated to GitLab
INFO: skipping project project/help/wiki in category Migrated to GitLab
INFO: skipping project project/jenkins/jobs in category Migrated to GitLab
INFO: skipping project project/jenkins/tools in category Migrated to GitLab
INFO: searching for projects matching fastlane
INFO: Successfully connected to https://gitlab.torproject.org
import gitolite project project/tor-browser/fastlane into gitlab legacy/gitolite/project/tor-browser/fastlane with desc 'Tor Browser app store and deployment configuration for Fastlane'? [Y/n] 
INFO: importing gitolite project project/tor-browser/fastlane into gitlab legacy/gitolite/project/tor-browser/fastlane with desc 'Tor Browser app store and deployment configuration for Fastlane'
INFO: building a new connect to cupani
INFO: defaulting name to fastlane
INFO: importing project into GitLab
INFO: Successfully connected to https://gitlab.torproject.org
INFO: loading group legacy/gitolite/project/tor-browser
INFO: archiving project
INFO: creating repository fastlane (fastlane) in namespace legacy/gitolite/project/tor-browser from https://git.torproject.org/project/tor-browser/fastlane into https://gitlab.torproject.org/legacy/gitolite/project/tor-browser/fastlane
INFO: migrating Gitolite repository project/tor-browser/fastlane to GitLab project legacy/gitolite/project/tor-browser/fastlane
INFO: uploading 399 bytes to /srv/git.torproject.org/repositories/project/tor-browser/fastlane.git/hooks/pre-receive
INFO: making /srv/git.torproject.org/repositories/project/tor-browser/fastlane.git/hooks/pre-receive executable
INFO: adding entry to rewrite_map /home/anarcat/src/tor/tor-puppet/modules/profile/files/git/gitolite2gitlab.txt
INFO: modifying gitolite.conf to add: "config gitweb.category = Migrated to GitLab"
INFO: rewriting gitolite config /home/anarcat/src/tor/gitolite-admin/conf/gitolite.conf to change project project/tor-browser/fastlane to category Migrated to GitLab
INFO: skipping project project/bridges/bridgedb-admin in category Migrated to GitLab
[...]

In the above, you can see migrated repositories skipped then the fastlane project being archived into GitLab. Another example with a later version of the script, processing only user repositories and showing the interactive prompt and a force-push into a fork:

$ fab -H cupani.torproject.org  gitolite.mass-repos-migration --include 'user/.*' --exclude '.*tor-?browser.*'
INFO: skipping project user/aagbsn/bridgedb in category Migrated to GitLab
[...]
INFO: skipping project user/phw/atlas in category Migrated to GitLab
INFO: processing project user/phw/obfsproxy (Philipp's obfsproxy repository) in category Users' development repositories (Attic)
INFO: Successfully connected to https://gitlab.torproject.org
INFO: user repository detected, trying to find fork phw/obfsproxy
WARNING: no existing fork found, entering user fork subroutine
INFO: found 6 GitLab projects matching 'obfsproxy' (https://gitweb.torproject.org/user/phw/obfsproxy.git)
0 legacy/gitolite/debian/obfsproxy
1 legacy/gitolite/debian/obfsproxy-legacy
2 legacy/gitolite/user/asn/obfsproxy
3 legacy/gitolite/user/ioerror/obfsproxy
4 tpo/anti-censorship/pluggable-transports/obfsproxy
5 tpo/anti-censorship/pluggable-transports/obfsproxy-legacy
select parent to fork from, or enter to abort: ^G4
INFO: repository is not empty: in-pack: 2104, packs: 1, size-pack: 414
fork project tpo/anti-censorship/pluggable-transports/obfsproxy into legacy/gitolite/user/phw/obfsproxy^G [Y/n] 
INFO: loading project tpo/anti-censorship/pluggable-transports/obfsproxy
INFO: forking project user/phw/obfsproxy into namespace legacy/gitolite/user/phw
INFO: waiting for fork to complete...
INFO: fork status: started, sleeping...
INFO: fork finished
INFO: cloning and force pushing from user/phw/obfsproxy to legacy/gitolite/user/phw/obfsproxy
INFO: deleting branch protection: <class 'gitlab.v4.objects.branches.ProjectProtectedBranch'> => {'id': 2723, 'name': 'master', 'push_access_levels': [{'id': 2864, 'access_level': 40, 'access_level_description': 'Maintainers', 'deploy_key_id': None}], 'merge_access_levels': [{'id': 2753, 'access_level': 40, 'access_level_description': 'Maintainers'}], 'allow_force_push': False}
INFO: cloning repository git-rw.torproject.org:/srv/git.torproject.org/repositories/user/phw/obfsproxy.git in /tmp/tmp6orvjggy/user/phw/obfsproxy
Cloning into bare repository '/tmp/tmp6orvjggy/user/phw/obfsproxy'...
INFO: pushing to GitLab: https://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy
remote: 
remote: To create a merge request for bug_10887, visit:        
remote:   https://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy/-/merge_requests/new?merge_request%5Bsource_branch%5D=bug_10887        
remote: 
[...]
To ssh://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy
 + 2bf9d09...a8e54d5 master -> master (forced update)
 * [new branch]      bug_10887 -> bug_10887
[...]
INFO: migrating repo
INFO: migrating Gitolite repository https://gitweb.torproject.org/user/phw/obfsproxy.git to GitLab project https://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy
INFO: adding entry to rewrite_map /home/anarcat/src/tor/tor-puppet/modules/profile/files/git/gitolite2gitlab.txt
INFO: modifying gitolite.conf to add: "config gitweb.category = Migrated to GitLab"
INFO: rewriting gitolite config /home/anarcat/src/tor/gitolite-admin/conf/gitolite.conf to change project user/phw/obfsproxy to category Migrated to GitLab
INFO: processing project user/phw/scramblesuit (Philipp's ScrambleSuit repository) in category Users' development repositories (Attic)
INFO: user repository detected, trying to find fork phw/scramblesuit
WARNING: no existing fork found, entering user fork subroutine
WARNING: no matching gitlab project found for user/phw/scramblesuit
INFO: user fork subroutine failed, resuming normal procedure
INFO: searching for projects matching scramblesuit
import gitolite project user/phw/scramblesuit into gitlab legacy/gitolite/user/phw/scramblesuit with desc 'Philipp's ScrambleSuit repository'?^G [Y/n] 
INFO: checking if remote repo https://git.torproject.org/user/phw/scramblesuit exists
INFO: importing gitolite project user/phw/scramblesuit into gitlab legacy/gitolite/user/phw/scramblesuit with desc 'Philipp's ScrambleSuit repository'
INFO: importing project into GitLab
INFO: Successfully connected to https://gitlab.torproject.org
INFO: loading group legacy/gitolite/user/phw
INFO: creating repository scramblesuit (scramblesuit) in namespace legacy/gitolite/user/phw from https://git.torproject.org/user/phw/scramblesuit into https://gitlab.torproject.org/legacy/gitolite/user/phw/scramblesuit
INFO: archiving project
INFO: migrating Gitolite repository https://gitweb.torproject.org/user/phw/scramblesuit.git to GitLab project https://gitlab.torproject.org/legacy/gitolite/user/phw/scramblesuit
INFO: adding entry to rewrite_map /home/anarcat/src/tor/tor-puppet/modules/profile/files/git/gitolite2gitlab.txt
INFO: modifying gitolite.conf to add: "config gitweb.category = Migrated to GitLab"
INFO: rewriting gitolite config /home/anarcat/src/tor/gitolite-admin/conf/gitolite.conf to change project user/phw/scramblesuit to category Migrated to GitLab
[...]

Acute eyes will notice the bell used as a notification mechanism as well in this transcript.

A lot of the code is now useless for us, but some, like "commit and push" or is-repo-empty, live on in the git module and, of course, the gitlab module has grown some legs along the way. We've also found fun bugs, like a file descriptor exhaustion in bash, among other oddities. The retirement milestone and issue 41215 have a detailed log of the migration, for those curious.

This was a challenging project, but it feels nice to have this behind us. This gets rid of 2 of the 4 remaining machines running Debian "old-old-stable", which moves a bit further ahead in our late bullseye upgrades milestone.

Full transparency: we tested GPT-3.5, GPT-4, and other large language models to see if they could answer the question "write a set of rewrite rules to redirect GitWeb to GitLab". This has become a standard LLM test for your faithful writer to figure out how good a LLM is at technical responses. None of them gave an accurate, complete, and functional response, for the record.

The actual rewrite rules as of this writing follow, for humans who actually like working answers provided by expert humans instead of artificial intelligence, which currently seems to amount to glorified, mansplaining interns.

git.torproject.org rewrite rules

Those rules are relatively simple in that they rewrite a single URL to its equivalent GitLab counterpart in a 1:1 fashion. They rely on the rewrite map mentioned above, of course.
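For reference, the entries in gitolite2gitlab.txt are plain key/value pairs, one per line, in the format Apache's txt RewriteMap expects. For example (paths taken from the migration transcripts above):

project/tor-browser/fastlane legacy/gitolite/project/tor-browser/fastlane
user/phw/scramblesuit legacy/gitolite/user/phw/scramblesuit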

RewriteEngine on
# this RewriteMap connects the gitweb projects to their GitLab
# equivalent
RewriteMap gitolite2gitlab "txt:/etc/apache2/gitolite2gitlab.txt"
# if this becomes a performance bottleneck, convert to a DBM map with:
#
#  $ httxt2dbm -i mapfile.txt -o mapfile.map
#
# and:
#
# RewriteMap mapname "dbm:/etc/apache/mapfile.map"
#
# according to reports lavamind found online, we hit such a
# performance bottleneck only around millions of entries, which is not our case

# those two rules can go away once all the projects are
# migrated to GitLab
#
# this matches the request URI so we can check the RewriteMap
# for a match next
#
# WARNING: this won't match URLs without .git in them, which
# *do* work now. one possibility would be to match the request
# URI (without query string!) with:
#
# /git/(.*)(.git)?/(((branches|hooks|info|objects/).*)|git-.*|upload-pack|receive-pack|HEAD|config|description)?.
#
# I haven't been able to figure out the actual structure of
# those URLs, so it's really hard to figure out the boundaries
# of the project name here. I stopped after pouring around the
# http-backend.c code in git
# itself. https://www.git-scm.com/docs/http-protocol is also
# kind of incomplete and unsatisfying.
RewriteCond %{REQUEST_URI} ^/(git/)?(.*).git/.*$
# this makes the RewriteRule match only if there's a match in
# the rewrite map
RewriteCond ${gitolite2gitlab:%2|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(git/)?(.*).git/(.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$2}.git/$3 [R=302,L]

# Fallback everything else to GitLab
RewriteRule (.*) https://gitlab.torproject.org [R=302,L]

gitweb.torproject.org rewrite rules

Those are the vastly more complicated GitWeb to GitLab rewrite rules.

Note that we say "GitWeb" but we were actually not running GitWeb but cgit, as the former didn't actually scale for us.

RewriteEngine on
# this RewriteMap connects the gitweb projects to their GitLab
# equivalent
RewriteMap gitolite2gitlab "txt:/etc/apache2/gitolite2gitlab.txt"

# special rule to process targets of the old spec.tpo site and
# bring them to the right redirect on the new spec.tpo site. that should turn, for example:
#
# https://gitweb.torproject.org/torspec.git/tree/address-spec.txt
#
# into:
#
# https://spec.torproject.org/address-spec
RewriteRule ^/torspec.git/tree/(.*).txt$ https://spec.torproject.org/$1 [R=302]

# list of endpoints taken from cgit's cmd.c

# those two RewriteCond are necessary because we don't move
# all repositories at once. once the migration is completed,
# they can be removed.
#
# and yes, they are copied all over the place below
#
# create a match for the project name to check if the project
# has been moved to GitLab
RewriteCond %{REQUEST_URI} ^/(.*).git(/.*)?$
# this makes the RewriteRule match only if there's a match in
# the rewrite map
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
# main project page, like summary below
RewriteRule ^/(.*).git/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/ [R=302,L]

# summary
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/summary/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/ [R=302,L]

# about
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/about/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/ [R=302,L]

# commit
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond "%{QUERY_STRING}" "(.*(?:^|&))id=([^&]*)(&.*)?$"
RewriteRule ^/(.*).git/commit/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%2 [R=302,L,QSD]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/commit/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD [R=302,L]

# diff, incomplete because can diff arbitrary refs and files in cgit but not in GitLab, hard to parse
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/diff/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%1 [R=302,L,QSD]

# patch
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/patch/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%1.patch [R=302,L,QSD]

# rawdiff, incomplete because can show only one file diff, which GitLab cannot
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/rawdiff/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%1.diff [R=302,L,QSD]

# log
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/log/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/%1 [R=302,L,QSD]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/log/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD [R=302,L]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/log(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD$2 [R=302,L]

# atom
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/atom/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/%1 [R=302,L,QSD]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/atom/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD [R=302,L,QSD]

# refs, incomplete because two pages in GitLab, defaulting to "tags"
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/refs/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tags [R=302,L]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/tag/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tags/%1 [R=302,L,QSD]

# tree
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/tree(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tree/%1$2 [R=302,L,QSD]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/tree(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tree/HEAD$2 [R=302,L]

# /-/tree has no good default in GitLab, revert to HEAD which is a good
# approximation (we can't assume "master" here anymore)
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/tree/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tree/HEAD [R=302,L]

# plain
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/plain(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/raw/%1$2 [R=302,L,QSD]
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/plain(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/raw/HEAD$2 [R=302,L]

# blame: disabled
#RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
#RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
#RewriteCond %{QUERY_STRING} h=([^&]*)
#RewriteRule ^/(.*).git/blame(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/blame/%1$2 [R=302,L,QSD]
# same default as tree above
#RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
#RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
#RewriteRule ^/(.*).git/blame(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/blame/HEAD/$2 [R=302,L]

# stats
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/stats/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/graphs/HEAD [R=302,L]

# still TODO:
# repolist: once migration is complete
#
# cannot be done:
# atom: needs a feed token, user must be logged in
# blob: no direct equivalent
# info: not working on main cgit website?
# ls_cache: not working, irrelevant?
# objects: undocumented?
# snapshot: pattern too hard to match on cgit's side

# special case, we keep a copy of the main index on the archive
RewriteRule ^/?$ https://archive.torproject.org/websites/gitweb.torproject.org.html [R=302,L]
# Fallback: everything else to GitLab
RewriteRule .* https://gitlab.torproject.org [R=302,L]

The reference copy of those is available in our (currently private) Puppet git repository.

01 May 2024 2:55pm GMT

Ed Crewe: Software Engineering Hiring and Firing



The jump in interest rates to the highest level in over 20 years that hit in summer 2023 for the US, UK and many other countries is still impacting the software industry. Rates may be due to drop soon, but currently the jump has choked off investment, upped borrowing costs and led to many software companies making engineers redundant to please the markets.

For the UK the estimate is around 8% of software industry jobs made redundant. Although, strangely, the overall trend in vacancies for software engineers continues to march upwards: the initial surge after the pandemic dipped last summer but has now recovered.
But if you work in the industry you are bound to have colleagues and friends who have been made redundant, if you are lucky enough to have not been impacted personally.

Given recent history, I thought it may be worth reflecting on my personal experience of the whole hiring and firing process, in the tech industry. It is a UK centric view, but the companies I have worked for in the last 8 years are US software companies.

I have been fired, hired and conducted technical interviews to hire others. Giving me a few different perspectives.

This post is NOT about getting your first Software job

I first got a coding job in the public sector and it was as a self taught web developer in the 1990s, before web development was a thing you could get a degree in. So I initially got a job in IT support, volunteered to act up (ie no pay increase) and built some websites that were needed, then became a full time web developer through a portfolio of work, ie sites.

Today junior developers may have to prove themselves suitable by artificial measures. I skipped these, so I do not have any professional certifications, or any to recommend. I also don't know how to ace coding algorithm or personality profile assessments.

Once you are 5-10 years in to a software career - none of those approaches are used for hiring decisions.

Only large companies are likely to subject you to them, and that is really out of fairness to the juniors who have to go through them, and to screen out dodgy applicants. Screening just needs to be passed; it will not have any input into whether you get the job. Hence acing the coding interview, as promoted by sites such as LeetCode, is not even a thing; only passing coding exercise systems in order to start, or switch to, a career as a developer. I would recommend starting an open source project instead, to demonstrate you can actually code.

The majority of small to medium software companies and of job vacancies require experience and in effect have no vacancies for the most junior software grades with less than 3 years under their belt. So they tend not to use any of these filtering methods. They just want to see proof that you are already a developer, and usually base that on face to face interviews and examples of your code you provide them. So much like how I was originally hired back in the 1990s.

I have only been subject to a LeetCode style test once, which was for a generic job application, i.e. hiring for numbers of SREs of various seniority, for a FAANG.


F I R E D

When you get that unexpected one to one Zoom call with your manager appearing in your calendar these days, it is unlikely to be great news 😓

In the majority of cases the firing process, or to be more polite, redundancy, is all about balancing the finances of the whole company or institution. As such it is very unlikely to be about you.

Of course people are also fired as individuals for various reasons: not being any good at their job, failing to get along with their manager, being a bad culture fit, jobs turning out not to be what was advertised, or expressing political views. Unlike the UK Education sector where I once worked, where 50% of staff are union members, US software companies have less than 1% membership, so they don't tend to respond well to dissent.

Mostly this happens via failing probation, at around 15%, then maybe another 5% annually for disciplinary / performance improvement failure.

If you want to try getting individually fired then go the overemployed route. Get two or three jobs at once and test how long it takes before the company notices that your 110% is now only 40% and fires you. The rule of thumb is the larger the company, the longer it takes!

But this post's focus isn't about individual firing, it's about organizational hiring and firing.

Firing Reasons

  1. A company may be doing badly in a slow long term way, so it has to chop as part of a restructure and downsize to attempt to fix that.
  2. Alternatively the company could be doing really well. So it gets the attention of a big investment company and is bought up and merged with its rival. To fix overlap and justify the merger - both companies lose 20% of staff.
  3. Maybe it needs to pivot towards a new area (currently likely to be AI) and so chop 20% of its staff so it can hire 15% experienced, and pricey, AI developers.
  4. Or it may just have had a one off external impacting event that hit it financially. So to balance the earnings for that year and keep its share price good, it chops a bunch of staff. It will rehire next year, when it suits the balance sheet. This is the example in which I was made redundant along with 5% of staff; it was a big company, so that was a few thousand people globally.
  5. Finally it may be an industry wide phenomenon, as it is with the current redundancies in the software industry. A world clamp-down on easy cheap loans means investment-company-driven industries such as tech are no longer awash with spare cash. Cutbacks look good right now, and keep the share price high.
    Hence redundancies that are nothing to do with the industry itself or its future prospects.

That is mirrored in who is fired. Companies do not keep a log book of gold stars and black marks against each employee. They do not use organizationally triggered rounds of redundancies to select individuals to fire. They certainly have not got the capability to accurately determine all the best employees and only fire the worst ones. You will be fired based on what part of the organization you are in, how much it is valued in the current strategy and how much you cost vs others who could do your job. If you are currently between teams, or in a new team or role which has yet to establish itself, then when the music stops - like musical chairs - bad luck, you are out.

The only personal element may be that a whole team seen as under-performing or difficult to manage might be axed, no matter that it contains a star performer. Decisions may also be geographic: let's axe the Greek office and save by withdrawing engineering from a country, which is again how I was made redundant - the rest of my team was in Greece.
Alternatively it may be: fire under 20 staff from each country, to avoid the more burdensome regulation for bulk layoffs.

The organization could create an insecure / downturn atmosphere to encourage staff to leave, because it's a lot cheaper for people to leave than for the company to pay out redundancy settlements.

Redundancy keeps the average employees 😐

As a result, in response to significant redundancies an organisation will tend to lose more of the best employees, since they are the most able to move, the most likely to get a big pay rise if they move and the least likely to want to stick around if they see negative organisational change. The software industry has very high staff turnover, at almost 20%, outweighing any nominal idea of removing less efficient staff.

If a company handles things well it may only lose a representative productivity range of staff from best to worst. But a bulk redundancy process is likely to lead to the biggest loss in the top talent, get rid of slightly more of the bottom dwellers and so result in maximising the mediocre!

In summary the answer to 'Why me?' in group redundancies is "because you were there" ... and you didn't have a personal friendship with the CEO 😉. Of course that is why new CEOs are often brought in to restructure - the first step of which is to take an axe to the current C-suite.

Some of the best software engineers I have worked with have been made redundant at some point in their career. Group redundancies are not about you or how well you do your job. But taking it personally and challenging the messenger, with why me?, as demonstrated by recent viral videos, is an understandable emotional response to rejection, and the misguided belief that work aims to be some form of meritocracy, in the same way college might.

LIFO and FIFO

LIFO rather than FIFO is the norm in Firing. New hires are less likely to have established themselves as essential to the company, and have fewer personal connections within it. More importantly, many countries' redundancy legislation doesn't kick in until over 2 years of employment, and the longer you have been employed the more the company will have to pay to terminate you.

Which means a new hire who has uprooted for their new tech job will be the most likely to find themselves losing that job when bulk redundancies hit.
But FIFO has its place too: next would be older engineers. Some companies don't even hire hands-on engineers much over the age of 40 anyway. But staff near retirement have at most only a few years left to contribute and may cost more for the same grade, so encouraging early retirement can be part of the bulk redundancy process.

Prejudicial Firing

Whilst redundancy is all about costs and not about your personal performance, that is not to say companies who pass the redundancy choices down to junior managers may not end up firing disproportionate numbers of workers who are not from the same background as their manager, i.e. white USA males, ideally younger than the manager. But prejudice is not personal either; that is pretty much what defines it as prejudice: a pre-judgement of people based on physical characteristics rather than their ability at the job. People are also least likely to fire staff that they have the most in common with, resulting in prejudicial firing.
Unfortunately it seems many companies with a good diversity policy for hiring may not have an adequate one for firing, again resulting in losing more of the higher performing staff.

I have heard of a case where someone got a new manager, who on joining was told to cut from his team, so he fired everyone outside the USA. The worker was so keen to stay at their current employer they went over the head of their manager to senior management and asked for their redundancy to be repealed. Since they had been at the company many years and personally knew senior management, this worked.
Alternatively a more purely cost-based restructure may hire all developers from cheaper countries and fire most of those in the US, as happened with Google's Python team recently.

Fight for your job?

The company may set up a pool process for bulk redundancies if numbers are high enough per country, where you can fight for a place on the lifeboat of remaining positions.

In both cases I would recommend that you don't waste time on a company that doesn't value you. If you do stay, you risk dealing with the bulk redundancy aftermath, which will be present unless the redundancies were for a pivot (3) or a one-off event (4):
an increased workload, pay freezes, no bonus, needing to over-work to justify being kept on, plus a negative work atmosphere.

In a case where I stayed after the department I was in was axed, I had to reapply for a new job which was moved to a different division. The work was less worthwhile and at the time, the employment of in-house software developers as a whole, was questioned as being unnecessary for the organisation. I outstayed my welcome for 18 months of legacy commercial software support, before getting the message and quitting.

Lesson learnt: if you must ask to stay in your company, via senior management, a pool or reapplication, make sure you look around and apply for other jobs outside of it at the same time.

You also miss out on a minimum of a couple of months tax free pay as a settlement.

On the other hand, if the redundancy round is for a more minor pivot, and you are happy in the role, it may be well worth staying around to see how things pan out.

Of course you may get no choice in the matter, in which case, get straight into GET HIRED mode, and start the job search. If you can manage it fast enough, you will benefit financially from the whole process. Although if the reason is (5), a sector wide reduction, then it will take longer and be harder to obtain the usual 20% pay increase that a new position can offer.


H I R E D

Why change jobs (aside from being fired!)

  • It is a lot easier to get a pay rise or promotion by changing companies, than being promoted internally. To fast track your career to a principal or architect top IC role. Or just get a pay rise.
  • Changing jobs gives you much wider experience, of different technology, approaches and cultures. Making you a better engineer.
  • If you have been in your current job over 10 years without significant internal promotions or changes of role then it is detrimental to your CV, indicating you are stuck in a rut and unable to handle change, eg. new technology.
  • You want to shift sectors.
    I changed from public sector web developer, to commercial cloud engineer with one move.
  • You want to get into new technology that is not used in your current role.
    I changed from a Python, Ruby config management automation engineer to a Kubernetes Golang engineer with another.
  • You want to change your role in tech, or leave it entirely. For example get out of sales as a solution architect and back into a more technical role as an SRE.
On that basis many software engineers change jobs every 2 or 3 years for part of their careers. It's expected; the average engineer at a FAANG stays less than 3 years.

Of course you probably need to be in a job for at least 2 years to fully master it.
If your CV has loads of similar positions where you barely make it past the probation period, it's marking you out as a failure at those roles == fail hiring at the first step, the HR CV check.

Upskilling

The other problem is that changing jobs to change roles, even if it's just to use a new language or framework, can be blocked by roles requiring experience in that area on the CV to get interviewed for the job in the first place. For software engineering that is less of an issue, since tech changes faster than any other sector.
You just need to prove you have a range of experience and software languages and are willing to learn, early in a technology boom. To catch the cloud engineer bus, I got a job in it in 2016. The US cloud sector was $8 billion back then. It is $600 billion now. Similarly to get on board with Golang and Kubernetes in 2019. In the first few years of a tech boom most companies will initially have to cross train engineers without direct experience. The corollary of that is that in the current downturn attempting to pivot to an established technology, which k8s has become, is going to be much harder.

Market rates

Clearly ML ops and AI data science are current booming areas. The demand so far outstrips current supply that switching to a more junior Python AI role may pay as well as a senior Django web developer role, for example.

So around £60k for a junior role, but in 3-4 years it should jump to at least £100k for a senior AI engineer. Of course for US salaries add 30%, plus usually free medical, life insurance etc. The lower tax rates cancel out the higher cost of living in the US ... so it's UK salary +30% in real terms*. Researching the going rate for the particular role, technical skills and sector you are applying for is a necessary part of the hiring process, so that you don't let recruitment bargain you down too low.

*
Note that geographic software pay differences are why you often come across engineers of other nationalities emigrating to, and working in, the higher paying countries: USA, Canada and Australia. I have worked with many people from the UK and Europe who live in the USA, and Indians who live there or in Europe, for example.
Of course as a cheap foreign worker myself, I too stick with US companies partly because they pay rather more than UK ones, even if a lot lower than what I would get if I moved there 😉

Now is the time when such a switch will be easier to accomplish without having to work nights doing courses, certifications and personal projects. The usual means of demonstrating your ability without any work experience.

The caveat here is that moving jobs in a down turn, as we are arguably experiencing currently, can depress the market salary rates and if you are already at the top of those when made redundant, can mean you have to take a pay cut for a year or two rather than face the cost of long term unemployment.

The hiring process for an experienced software engineer role
Interview to Offer should take a month.

If not then the recruitment is likely for a group of roles in an expansion process and, from screening and CV, you are not one of the top candidates. You may be waiting on the backlog of potential interviewees for a couple of extra months before it properly kicks off.
Or you are told the post is no longer available, sorry!
Even if you would eventually get a post it may stretch your redundancy settlement. Therefore I would not bother pursuing any application process that is looking to be stretching on past 6 weeks.
Start date will be 5 weeks from contract (partly to cater for notice, referee and compliance checks etc)


That makes it 2 months minimum from applying for a role to starting.


The process will consist of a technical assessment task and at least 3 interviews: screening, manager and technical.
There is usually another for an introduction to team mates / the office etc., which is unlikely to have any effect on the hiring decision unless you and your potential new manager take an instant personal dislike to each other.


The HR screening interview just checks you are a genuine candidate for the job.
The Manager interview similarly is more about checking you will fit in with the company and team, plus that you have basic personal communication skills.

The Technical Interview is what matters

Passing the technical interview is what really decides whether you will get a job offer. Sometimes the tech interview may be split into two, one more task and questionnaire based and the other more discussion. Often the initial task part will be given as WFH.

The technical interview will consist of technical questions to explore whether you have the knowledge and experience required, plus something to confirm you can write code and discuss that code, for a developer or SRE role. For the former it would likely be application code whilst for the latter automation code.
For a more purely system administration / IT support role it will involve specifying your processes for resolving issues.

If you are unlucky and it is an in person interview, you may have to whiteboard pseudo code live in response to a changing task described to you on the spot. Although I have only had that once. More common, especially for hybrid / remote roles, is the take away task. To be completed in a 'few hours' at most.

It is possible that either of the above could be replaced by another source for your code. Talking through one of your open source packages, if you have any. Or talking through one or two longer automated coding exercise assessed tasks. I have never come across either of these though.

The main point is that the core of any technical interview for a developer related role will involve talking through code you have written, as a kicking-off point to check your understanding of the code, how it could be improved, how you would tackle scaling or a new exemplar functional requirement, its faults and features.

You will be asked to talk through past code or technical work in a more generic manner in response to standard questions along the lines of examples of your past work that show how you fit the job.

Preparing for the Technical Interview

It doesn't take much to work out that a 20% pay rise is worth a day's worth of work a week.
Assuming you stay in your new job for 2 or 3 years - that is equivalent to 6 months' pay.
On that basis even doing a week of work to apply to, and prepare for, a single job is still very well worth it, if you get the job.
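A quick back-of-the-envelope check of that arithmetic, purely as illustration:

# One working day out of five is 20% of a week's pay.
pay_rise = 0.20
print(pay_rise * 5)    # 1.0 -> worth about one day's pay per week

# Over a 2-3 year stay (say 30 months), a 20% rise adds up to ~6 months' pay.
print(pay_rise * 30)   # 6.0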

Adopting a scatter gun approach, ie applying with a generic CV and covering letter to 10 or more jobs, is a waste of time in my view. If you need a new job, then it should be one you are genuinely interested in and research. That means probably you should only have a maximum of 3 tailored applications on the go at once. Even when I was made redundant (and about to get married) I think I limited myself to 4 job applications in total, with one primary one that thankfully I did end up getting.

There are many sites that can advise how best to do that, based on the framework I have outlined above for how your hiring will be decided. I think preparing some Challenge Action Result stories targeted at the details of the new employer is useful. Plus spending a day or so refining that '2 hours' development task. Researching the company and preparing specific questions and perhaps suggestions for your interviewers.

Being a Technical Interviewer

From the other side of the table, clearly candidates need to show sufficient competency for the post. They may show it, but only within a totally different technical stack. Smaller companies tend to have less capacity and time to get people up to speed with new tech, so they will likely fail these candidates even though they are capable of doing the job eventually.
The technical interviewers will tend to pair on the assessment - to improve its consistency. Swapping partners for interviews regularly also helps.

The assessment process is likely to use some online system such as Jobvite or Greenhouse where each interviewer assesses the candidate. Finally summarising it all with a recommendation for strong pass, pass or fail. Sometimes for a specific post and grade, otherwise the assessment can include a grade recommendation. The manager then rubber stamps that assuming appropriate funding is available. HR's job is to beat the candidate down to the lowest reasonable price, without going so low the candidate walks away.

A healthy growing company will tend to have a rolling recruitment process as they expect to be increasing head count in proportion to customers and revenue. On that basis they will likely be aiming to recruit anyone with a good pass, plus maybe most of the passes too.

Given that engineering jobs are highly specialised and require relevant experience I have not seen cases of way more interviewees than jobs. Currently, even with all the redundancies, there is still an under supply of engineers.
Also the approach for HR will be to set experience and skills pre-requisites for the roles that keep the number of those who make it to technical interview down to around double the number of vacancies, since each interview takes out a day of work for each interviewing engineer to prep, interview and assess.

HIRING SUMMARY

You must pass each of the first 5 or 6 steps to get to the next one and get the job.

  1. HR checks the written application comes from a plausible candidate
  2. FAANG-sized company: an automated quiz / Leetcode-style challenge to reduce the numbers, because they get far more speculative applicants
  3. Recruiter chat to check the candidate is genuine and available
  4. CV skills / experience check vs other applicants to shortlist those worth interviewing
  5. Technical task, either a takeaway or a whiteboard / questionnaire interview
  6. Technical interview, in person or over Zoom, with engineers

  7. Manager interview / introduction to team mates.
  8. Recruiter chat. Negotiate exact salary. Agree start date.
  9. Contract is signed, YOU ARE HIRED.

01 May 2024 2:47pm GMT

Real Python: Python Sequences: A Comprehensive Guide

A phrase you'll often hear is that everything in Python is an object, and every object has a type. This points to the importance of data types in Python. However, often what an object can do is more important than what it is. So, it's useful to discuss categories of data types and one of the main categories is Python's sequence.

In this tutorial, you'll learn about:

  • Basic characteristics of a sequence
  • Operations that are common to most sequences
  • Special methods associated with sequences
  • Abstract base classes Sequence and MutableSequence
  • User-defined mutable and immutable sequences and how to create them

This tutorial assumes that you're familiar with Python's built-in data types and with the basics of object-oriented programming.

Get Your Code: Click here to download the free sample code that you'll use to learn about Python sequences in this comprehensive guide.

Take the Quiz: Test your knowledge with our interactive "Python Sequences: A Comprehensive Guide" quiz. You'll receive a score upon completion to help you track your learning progress:


In this quiz, you'll test your understanding of sequences in Python. You'll revisit the basic characteristics of a sequence, operations common to most sequences, special methods associated with sequences, and how to create user-defined mutable and immutable sequences.

Building Blocks of Python Sequences

It's likely you used a Python sequence the last time you wrote Python code, even if you don't know it. The term sequence doesn't refer to a specific data type but to a category of data types that share common characteristics.

Characteristics of Python Sequences

A sequence is a data structure that contains items arranged in order, and you can access each item using an integer index that represents its position in the sequence. You can always find the length of a sequence. Here are some examples of sequences from Python's basic built-in data types:

Python
>>> # List
>>> countries = ["USA", "Canada", "UK", "Norway", "Malta", "India"]
>>> for country in countries:
...     print(country)
...
USA
Canada
UK
Norway
Malta
India
>>> len(countries)
6
>>> countries[0]
'USA'

>>> # Tuple
>>> countries = "USA", "Canada", "UK", "Norway", "Malta", "India"
>>> for country in countries:
...     print(country)
...
USA
Canada
UK
Norway
Malta
India
>>> len(countries)
6
>>> countries[0]
'USA'

>>> # Strings
>>> country = "India"
>>> for letter in country:
...     print(letter)
...
I
n
d
i
a
>>> len(country)
5
>>> country[0]
'I'

Lists, tuples, and strings are among Python's most basic data types. Even though they're different types with distinct characteristics, they have some common traits. You can summarize the characteristics that define a Python sequence as follows:

  • A sequence is an iterable, which means you can iterate through it.
  • A sequence has a length, which means you can pass it to len() to get its number of elements.
  • An element of a sequence can be accessed based on its position in the sequence using an integer index. You can use the square bracket notation to index a sequence.

There are other built-in data types in Python that also have all of these characteristics. One of these is the range object:

Python
>>> numbers = range(5, 11)
>>> type(numbers)
<class 'range'>

>>> len(numbers)
6

>>> numbers[0]
5
>>> numbers[-1]
10

>>> for number in numbers:
...     print(number)
...
5
6
7
8
9
10

You can iterate through a range object, which makes it iterable. You can also find its length using len() and fetch items through indexing. Therefore, a range object is also a sequence.

You can also verify that bytes and bytearray objects, two more of Python's built-in data structures, are sequences. Both are sequences of integers. A bytes object is immutable, while a bytearray is mutable.
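For example, a quick interactive check (an illustrative sketch, not part of the original article) shows both types behaving as sequences of integers:

>>> data = bytes([72, 105])
>>> data
b'Hi'
>>> len(data)
2
>>> data[0]  # indexing a bytes object returns an int
72
>>> mutable = bytearray(b"Hi")
>>> mutable[0] = 104  # allowed, because bytearray is mutable
>>> mutable
bytearray(b'hi')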

Special Methods Associated With Python Sequences

In Python, the key characteristics of a data type are determined using special methods, which are defined in the class definitions. The special methods associated with the properties of sequences are the following:

  • .__iter__(): This special method makes an object iterable using Python's preferred iteration protocol. However, it's possible for a class without an .__iter__() special method to create iterable objects if the class has a .__getitem__() special method that supports iteration. Most sequences have an .__iter__() special method, but it's possible to have a sequence without this method.
  • .__len__(): This special method defines the length of an object, which is normally the number of elements contained within it. The len() built-in function calls an object's .__len__() special method. Every sequence has this special method.
  • .__getitem__(): This special method enables you to access an item from a sequence. The square brackets notation can be used to fetch an item. The expression countries[0] is equivalent to countries.__getitem__(0). For sequences, .__getitem__() should accept integer arguments starting from zero. Every sequence has this special method. This method can also ensure an object is iterable if the .__iter__() special method is missing.

Therefore, all sequences have a .__len__() and a .__getitem__() special method and most also have .__iter__().
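As a minimal sketch of that idea (an illustrative example, not code from the article), a user-defined class qualifies as a sequence simply by implementing .__len__() and .__getitem__() with integer indices starting from zero:

>>> class Countdown:
...     """An immutable sequence counting down from start to 1."""
...     def __init__(self, start):
...         self._start = start
...     def __len__(self):
...         return self._start
...     def __getitem__(self, index):
...         if index < 0:  # support negative indices like built-in sequences
...             index += self._start
...         if not 0 <= index < self._start:
...             raise IndexError("Countdown index out of range")
...         return self._start - index
...
>>> steps = Countdown(3)
>>> len(steps)
3
>>> steps[0], steps[-1]
(3, 1)
>>> list(steps)  # iteration falls back to .__getitem__()
[3, 2, 1]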

However, having these special methods isn't enough to make an object a sequence. For example, many mappings also have these three methods, but mappings aren't sequences.

A dictionary is an example of a mapping. You can find the length of a dictionary and iterate through its keys using a for loop or other iteration techniques. You can also fetch an item from a dictionary using the square brackets notation.

This characteristic is defined by .__getitem__(). However, .__getitem__() needs arguments that are dictionary keys and returns their matching values. You can't index a dictionary using integers that refer to an item's position in the dictionary. Therefore, dictionaries are not sequences.
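A short interactive session (illustrative only, not from the article) makes the difference clear:

>>> capitals = {"Norway": "Oslo", "India": "New Delhi"}
>>> len(capitals)  # a dictionary has a length ...
2
>>> capitals["Norway"]  # ... and supports square brackets, but only with keys
'Oslo'
>>> capitals[0]  # positional indexing fails, so a dict is not a sequence
Traceback (most recent call last):
  ...
KeyError: 0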

Slicing in Python Sequences

Read the full article at https://realpython.com/python-sequences/ »


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

01 May 2024 2:00pm GMT

Zero to Mastery: Python Monthly Newsletter 💻🐍

53rd issue of Andrei Neagoie's must-read monthly Python Newsletter: Python Security Best Practices, State of Python 2024, Prompt Engineering, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.

01 May 2024 10:00am GMT

30 Apr 2024

feedPlanet Python

PyCoder’s Weekly: Issue #627 (April 30, 2024)

#627 - APRIL 30, 2024
View in Browser »

The PyCoder’s Weekly Logo


PEP 686: Make UTF-8 Mode Default

This Python Enhancement Proposal outlines making UTF-8 the default encoding throughout Python. This takes the addition of Unicode introduced in Python 3 to its full extent, applying it to file encoding, pipes, and more. Mechanisms for other encodings are still supported. This PEP is targeted for Python 3.15.
PEPS

What's Lazy Evaluation in Python?

This tutorial explores lazy evaluation in Python and looks at the advantages and disadvantages of using lazy and eager evaluation methods. By the end of this tutorial, you'll clearly understand which approach is best for you, depending on your needs.
REAL PYTHON

Build Your Own AI CLI Agent with Open Source by Pieces (OSP)

alt

Unlock the power of Pieces, right in your terminal! Our open-source CLI plugin helps you manage code snippets, chat with your on-device AI copilot, and even auto-generate commit messages. Join our community to refine your Python skills and influence a product used by 1000s of devs. Contribute today →
PIECES sponsor

Serverless Python in 2024

Talk Python interviews Tony Sherman and they discuss the current state of serverless computing in the Python world, including some of the newer tools and best practices.
KENNEDY & SHERMAN podcast

Django Developers Survey 2023 Results

JETBRAINS

Djangonauts Space Session 2 Applications Open!

DJANGONAUTS

PyPy v7.3.16 Release

PYPY

PEP 745: Python 3.14 Release Schedule

PEPS

Quiz: Writing Unit Tests for Your Code With unittest

REAL PYTHON

Discussions

High Quality Python Scripts or Small Libraries to Learn From?

HACKER NEWS

Articles & Tutorials

Filter Sensitive Contents From Django's Error Reports

Django has the ability to automatically email admins when a 500 error occurs. These kinds of errors can potentially contain sensitive information though, so there are decorators to hide these values. This post covers those as well as how to filter data when using Sentry.
GONÇALO VALÉRIO

Asyncio Coroutine Object Methods in Python

The async and await keywords that form Python's coroutine mechanism can be used for class methods as well as the more common case of functions. This article shows you how you can use asyncio with your objects.
JASON BROWNLEE

How to Prevent Data Leakage in pandas & scikit-learn

How you impute missing values in machine learning data sets can affect the quality of your training. This article teaches you what data leakage is and what steps you should take to avoid it.
DATASCHOOL

An Open Letter Regarding the DjangoCon Europe CfP

Putting on a conference is a complex matter and an attempt to clarify how future DjangoCons in Europe would be structured has resulted in push-back. This open letter is by a Django board member explaining the situation and a hope of how to move forward.
DJANGO SOFTWARE FOUNDATION

Python Basics: Lists and Tuples

In this video course, you'll learn about Python lists and tuples, including how to define and manipulate them in your code. By the end of the course, you'll be ready to effectively use lists and tuples in your programming projects.
REAL PYTHON course

Don't Lie in Interviews

This strongly worded opinion piece by Nat is in reaction to common advice given on Reddit and similar boards. Nat counters it all with "don't lie in interviews". Strong language warning.
NAT BENNETT

Fake Job Interviews Target Devs With New Python Backdoor

"A new campaign tracked as "Dev Popper" is targeting software developers with fake job interviews in an attempt to trick them into installing a Python remote access trojan (RAT)."
BILL TOULAS

Why You Need a "WTF Notebook"

There's a very specific reputation Nat wants to have on a team: "Nat helps me solve my problems. Nat gets things I care about done." Keeping a WTF notebook helps him do just that.
NAT BENNETT

Write Unit Tests for Your Python Code With ChatGPT

In this tutorial, you'll learn how to use ChatGPT to generate tests for your Python code. You'll use the chat to create doctest, unittest, and pytest tests for your code.
REAL PYTHON

Leibniz Formula for Π in Python, JavaScript, and Ruby

This is a bare-bones, side-by-side comparison of the Leibniz formula for calculating pi in Python, JavaScript, and Ruby, along with performance measurements.
PETER BENGTSSON

Better Test Parametrisation in pytest

This "Things I've Learned" post discusses how to take advantage of test parameterisation in pytest.
RODRIGO GIRÃO SERRÃO

Projects & Code

All Python 2023 Conference Talks Google Sheet

HH91

zpy: ZSH Helpers for Python Venvs, With Uv or Pip-Tools

GITHUB.COM/ANDYDECLEYRE

django-typescript-routes: Typescript Routes From a URL Conf

GITHUB.COM/BUTTONDOWN

pipxu: Install in Isolated Environments Using UV

GITHUB.COM/BULLETMARK

PyOptInterface: Interface for Mathematical Optimization

GITHUB.COM/METAB0T

Events

Weekly Real Python Office Hours Q&A (Virtual)

May 1, 2024
REALPYTHON.COM

Canberra Python Meetup

May 2, 2024
MEETUP.COM

Sydney Python User Group (SyPy)

May 2, 2024
SYPY.ORG

TOUFU

May 4 to May 5, 2024
OHTOUFU.COM

PyDelhi User Group Meetup

May 4, 2024
MEETUP.COM

Melbourne Python Users Group, Australia

May 6, 2024
J.MP


Happy Pythoning!
This was PyCoder's Weekly Issue #627.
View in Browser »

alt


[ Subscribe to 🐍 PyCoder's Weekly 💌 - Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

30 Apr 2024 7:30pm GMT

PyCharm: PyCharm 2024.1.1 Is Here! AI Assistant in Community Edition, Enhanced Endpoints Tool Window, and Navigation and Refactoring Across Notebooks and Scripts

Enhancements in the Endpoints tool window, extended GitHub gists support for notebooks, and navigation and refactoring across notebooks and scripts - these are just some of the improvements you'll find in PyCharm 2024.1.1!

You can download the latest version from our download page or update your current version through our free Toolbox App.

Key features

JetBrains AI Assistant in PyCharm Community Edition

JetBrains AI Assistant is now available in version 2024.1.1 of PyCharm Community Edition! With features ranging from smart suggestions to code generation, now Community Edition users can also enhance their coding journey with AI Assistant.

AI assistant

Improvements to the Endpoints tool window

Search URLs faster and more efficiently with the improved Endpoints tool window in PyCharm 2024.1.1. Use the dedicated Endpoints tab in Search Everywhere to have all your endpoints grouped by application with their routes displayed.

Endpoints tool window

Navigation and refactoring across notebooks and scripts

Enjoy navigation and refactoring between notebooks and Python scripts within a single project in PyCharm. Find declarations or usages easily, use the Rename refactoring, and have our full spectrum of code inspections at your disposal. Changes are synchronized across file types, so if you employ any of these features in a notebook, they will automatically be applied to the related script, and vice versa.

Navigation and refactoring across notebooks and scripts

Create gists from Jupyter notebooks

You can share Jupyter notebooks seamlessly and quickly now that PyCharm offers full support for GitHub gists. Create a gist for a single notebook or select several files in the Project tool window and create a Git repo with all of them at once.

Create gists from Jupyter notebooks

DataFrame statistics and distribution histograms

Review essential statistics directly within DataFrame headers in both Jupyter notebooks and Python scripts. Gain instant insights into how your data is distributed via the histograms provided in the DataFrame header.

DataFrame statistics and distribution histograms

IPython config file in the console

Save time by configuring your IPython console automatically using config files. Eliminate the need to import dependencies manually every time you use the console.

IPython config file in the console

And that's not all! Please visit our What's New page to discover other improvements in PyCharm 2024.1.1. You can also check out our full release notes for all the details to ensure you don't miss out on trying any of the enhancements.

Thank you for your continued support as we strive to improve your PyCharm experience. Please report any bugs through our issue tracker so we can take care of them as soon as possible. Connect with us on X (formerly Twitter) to share your valuable feedback on PyCharm 2024.1.1!

30 Apr 2024 6:28pm GMT

Mike Driscoll: How to Watermark a Graph with Matplotlib

Matplotlib is one of the most popular data visualization packages for the Python programming language. It allows you to create many different charts and graphs. This tutorial focuses on adding a "watermark" to your graph. If you need to learn the basics, you might want to check out Matplotlib-An Intro to Creating Graphs with Python.

Let's get started!

Installing Matplotlib

If you don't have Matplotlib on your computer, you must install it. Fortunately, you can use pip, the Python package manager utility that comes with Python.

Open up your terminal or command prompt and run the following command:

python -m pip install matplotlib

Pip will now install Matplotlib and any dependencies that Matplotlib needs to work properly. Assuming that Matplotlib installs successfully, you are good to go!

Watermarking Your Graph

Adding a watermark to a graph is a fun way to learn how to use Matplotlib. For this example, you will create a simple bar chart and then add some text. The text will be added at an angle across the graph as a watermark.

Open up your favorite Python IDE or text editor and create a new Python file. Then add the following code:

import matplotlib.pyplot as plt


def bar_chart(numbers, labels, pos):
    fig = plt.figure(figsize=(5, 8))
    plt.bar(pos, numbers, color="red")

    # add a watermark
    fig.text(1, 0.15, "Mouse vs Python",
             fontsize=45, color="blue",
             ha="right", va="bottom", alpha=0.4,
             rotation=25)

    plt.xticks(ticks=pos, labels=labels)
    plt.show()


if __name__ == "__main__":
    numbers = [2, 1, 4, 6]
    labels = ["Electric", "Solar", "Diesel", "Unleaded"]
    pos = list(range(4))
    bar_chart(numbers, labels, pos)

Your bar_chart() function takes in some numbers, labels and a list of positions for where the bars should be placed. You then create a figure to put your plot into. Then you create the bar chart using the list of bar positions and the numbers. You also tell the chart that you want the bars to be colored "red".

The next step is to add a watermark. To do that, you call fig.text(), which lets you add text on top of your plot. Here is a quick listing of the arguments that you pass in for this example:

  • x and y - the position of the text in figure coordinates (here 1 and 0.15, toward the bottom-right)
  • s - the text to display ("Mouse vs Python")
  • fontsize - the size of the text
  • color - the color of the text ("blue")
  • ha and va - the horizontal and vertical alignment of the text relative to that position
  • alpha - the transparency of the text, which gives the watermark effect
  • rotation - the angle, in degrees, at which the text is drawn

The last bit of code in bar_chart() adds the ticks and labels to the bottom of the plot.

When you run this code, you will see something like this:

Bar chart with a watermark made with matplotlib

Isn't that neat? You now have a simple plot, and you know how to add semi-transparent text to it, too!

Wrapping Up

Proper attribution is important in academics and business. Knowing how to add a watermark to your data visualization can help you do that. You now have that knowledge when using Matplotlib.

The Matplotlib package can create many other types of plots and provides much more customization than what is covered here. Check out its documentation to learn more!

The post How to Watermark a Graph with Matplotlib appeared first on Mouse Vs Python.

30 Apr 2024 3:01pm GMT

Real Python: Working With Global Variables in Python Functions

A global variable is a variable that you can use from any part of a program, including within functions. Using global variables inside your Python functions can be tricky. You'll need to differentiate between accessing and changing the values of the target global variable if you want your code to work correctly.
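As a quick illustration of that distinction (a minimal sketch, not taken from the course itself), reading a global name inside a function needs no declaration, while rebinding it requires the global statement:

counter = 0  # a global variable

def show_counter():
    # Reading a global name works without any declaration
    print(f"counter is {counter}")

def increment_counter():
    # Rebinding the global name requires the global statement;
    # otherwise Python would create a new local variable instead
    global counter
    counter += 1

show_counter()        # counter is 0
increment_counter()
show_counter()        # counter is 1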

Global variables can play a fundamental role in many software projects because they enable data sharing across an entire program. However, you should use them judiciously to avoid issues.

In this video course, you'll:


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

30 Apr 2024 2:00pm GMT

Robin Wilson: What’s the largest building in Southampton? Find out with 5 lines of code

Recently I became suddenly curious about the sizes of buildings in Southampton, UK, where I live. I don't know what triggered this sudden curiosity, but I wanted to know what the largest buildings in Southampton are. In this context, I'm using "largest" to mean largest in terms of land area covered - i.e. the area of the outline when viewed in plan view. Luckily I know about useful sources of geographic data, and also know how to use GeoPandas, so I could answer this question pretty quickly - in fact, in only five lines of code. I'll take you through this code below, as an example of a very simple GIS analysis.

First I needed to get hold of the data. I know Ordnance Survey release data on buildings in Great Britain, but to make this even easier we can go to Alastair Rae's website where he has split the OS data up into Local Authority areas. We need to download the buildings data for Southampton, so we go here and download a GeoPackage file.

Then we need to create a Python environment to do the analysis in. You can do this in various ways - with virtualenvs, conda environments or whatever - but you just need to ensure that Jupyter and GeoPandas are installed. Then create a new notebook and you're ready to start coding.

First, we import geopandas:

import geopandas as gpd

and then load the buildings data:

buildings = gpd.read_file("Southampton_buildings.gpkg")

The buildings GeoDataFrame has a load of polygon geometries, one for each building. We can calculate the area of a polygon with the .area property - so to create a new 'area' column in the GeoDataFrame we can run:

buildings['area'] = buildings.geometry.area

I'm only interested in the largest buildings, so we can now sort by this new area column, and take the first twenty entries:

top20 = buildings.sort_values('area', ascending=False).head(20)

We can then use the lovely explore function to show these buildings on a map. This will load an interactive map in the Jupyter notebook:

top20.explore()

If you'd like to save the interactive map to a standalone HTML file then you can do this instead:

top20.explore().save("map.html")

I've done that, and uploaded that HTML file to my website - and you can view it here.

So, putting all the code together, we have:

import geopandas as gpd
buildings = gpd.read_file("Southampton_buildings.gpkg")
buildings['area'] = buildings.geometry.area
top20 = buildings.sort_values('area', ascending=False).head(20)
top20.explore()

Five lines of code, with a simple analysis, resulting in an interactive map, and all with the power of GeoPandas.

Hopefully in a future post I'll do a bit more work on this data - I'd like to make a prettier map, and I'd like to try and find some way to test my friends and see if they can work out what buildings they are.

30 Apr 2024 11:30am GMT

Python Bytes: #381 Python Packages in the Oven

Topics covered in this episode:

  • Announcing py2wasm: A Python to Wasm compiler
  • Exploring Python packages with Oven and PyPI Browser
  • PyCharm Local LLM
  • Google shedding Python devs (at least in the US)
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=KlLuQ7UT4t8

About the show

Sponsored by ScoutAPM: pythonbytes.fm/scout

Connect with the hosts:

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list - we'll never share it.

Michael #1: Announcing py2wasm: A Python to Wasm compiler

  • py2wasm converts your Python programs to WebAssembly, running them at 3x faster speeds
  • thanks to Nuitka

Brian #2: Exploring Python packages with Oven and PyPI Browser

  • pypi.org is great, but there are some handy alternatives
  • Oven (oven.fming.dev): shows how to install packages with pip, pdm, rye, and poetry; similar metadata and description as PyPI; includes a README.md view (no tables yet, though); nice listing of versions; ability to look at what files are in wheels and tarballs (very cool); can be deployed yourself (a Node/Remix app); really slick
  • PyPI Browser (pypi-browser.org): view versions; view wheel and tarball contents; metadata and contents; no README view; a Starlette app that you can deploy on your own with a private registry, so that's cool

Michael #3: PyCharm Local LLM

  • Pretty awesome full-line completion based on a local LLM for PyCharm (requires PyCharm Professional)
  • For example, in a partial Flask view that loads videos, typing "ret" autocompletes to return flask.render_template('home/listing.html', videos=videos) - which is pretty miraculous, and correct

Brian #4: Google shedding Python devs (at least in the US)

  • Google lays off staff from Flutter, Dart and Python teams weeks before its developer conference - TechCrunch
  • Python, Flutter teams latest on the Google chopping block - The Register: "Despite Alphabet last week reporting a 57 percent year-on-year jump in net profit to $23.66 billion for calendar Q1, more roles are being expunged as the mega-corp cracks down on costs." "As for the Python team, the current positions have reportedly been 'reduced' in favor of a new team based in Munich."
  • MK: Related and timely: How one power-hungry leader destroyed Google search

Extras

Brian:

  • Python Gotcha: strip, lstrip, rstrip can remove more than expected - reminder: you probably want .removesuffix() and .removeprefix() (see the short example after these show notes)

Michael:

  • Using Llama3 in LMStudio

Joke: Broken System
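As a quick illustration of the strip gotcha mentioned in the extras (my own example, not from the episode): str.strip() treats its argument as a set of characters to remove, not an exact substring, which is why .removesuffix() and .removeprefix() are usually what you want:

>>> "production.json".strip(".json")  # strips any of the characters . j s o n
'producti'
>>> "production.json".removesuffix(".json")  # Python 3.9+, removes the exact suffix
'production'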

30 Apr 2024 8:00am GMT

29 Apr 2024

feedPlanet Python

PyCon: Meet PyCon US Keynote Speakers

We can't wait to welcome Jay Miller, Kate Chapman, Simon Willison, and Sumana Harihareswara to our stage as PyCon US keynote speakers this year.



We asked each of our keynote speakers a few questions ahead of the conference.

Check out our interviews with each of our keynote speakers below!


Jay Miller

"I fully believe I owe my entire tech career to the Python community. I met so many amazing people that I would become long lasting friends with."

Kate Chapman

"I'm excited to connect with people who are passionate about free and open source software and to learn about technologies that I haven't spent much time with."

Simon Willison

"Take advantage of the fact that so many people from the worldwide Python community are in the same place at the same time for just a couple of days. Everybody here wants to talk to you. And you should assume that anyone who you think is interesting will find you interesting as well and will want to hear from you."

Sumana Harihareswara

"I do stand-up comedy and theater. I take it seriously, my responsibility to educate and entertain if you're sitting in front of me for 40 minutes."


Register Now

Don't miss out on meeting our keynote speakers in person! Register now for PyCon US before we sell out. As a note, some of our tutorials are already sold out, as well as our hotel room blocks. There are only a few short weeks left before the conference, so don't wait, register today!

Stay in the Loop

The PyCon US website has all the information you need to know about attending our conference. In order to catch all the latest news, be sure to:

Engage with our community on social media by using our official hashtag: #PyConUS.

Thank you for supporting the Python community. We can't wait to meet you all in Pittsburgh in a few short weeks!

29 Apr 2024 2:45pm GMT

Real Python: Python's unittest: Writing Unit Tests for Your Code

The Python standard library ships with a testing framework named unittest, which you can use to write automated tests for your code. The unittest package has an object-oriented approach where test cases derive from a base class, which has several useful methods.

The framework supports many features that will help you write consistent unit tests for your code. These features include test cases, fixtures, test suites, and test discovery capabilities.

In this tutorial, you'll learn how to:

  • Write unittest tests with the TestCase class
  • Explore the assert methods that TestCase provides
  • Use unittest from the command line
  • Group test cases using the TestSuite class
  • Create fixtures to handle setup and teardown logic

To get the most out of this tutorial, you should be familiar with some important Python concepts, such as object-oriented programming, inheritance, and assertions. Having a good understanding of code testing is a plus.

Free Bonus: Click here to download the free sample code that shows you how to use Python's unittest to write tests for your code.

Take the Quiz: Test your knowledge with our interactive "Python's unittest: Writing Unit Tests for Your Code" quiz. You'll receive a score upon completion to help you track your learning progress:


In this quiz, you'll test your understanding of Python testing with the unittest framework from the standard library. With this knowledge, you'll be able to create basic tests, execute them, and find bugs before your users do.

Testing Your Python Code

Code testing or software testing is a fundamental part of a modern software development cycle. Through code testing, you can verify that a given software project works as expected and fulfills its requirements. Testing enforces code quality and robustness.

You'll do code testing during the development stage of an application or project. You'll write tests that isolate sections of your code and verify its correctness. A well-written battery or suite of tests can also serve as documentation for the project at hand.

You'll find several different concepts and techniques around testing. Most of them are beyond the scope of this tutorial. However, the unit test is an important and relevant concept. A unit test is a test that operates on an individual unit of software. A unit test aims to validate that the tested unit works as designed.

A unit is often a small part of a program that takes a few inputs and produces an output. Functions, methods, and other callables are good examples of units that you'd need to test.

In Python, there are several tools to help you write, organize, run, and automate your unit tests. In the Python standard library, you'll find two of these tools:

  1. doctest
  2. unittest

Python's doctest module is a lightweight testing framework that provides quick and straightforward test automation. It can read the test cases from your project's documentation and your code's docstrings. This framework is shipped with the Python interpreter as part of the batteries-included philosophy.
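For instance, a docstring like the following (a minimal sketch of the idea, not an example from this tutorial) is picked up and run by doctest:

def square(number):
    """Return the square of a number.

    >>> square(3)
    9
    >>> square(-4)
    16
    """
    return number * number

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples embedded in the docstrings above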

Note: To dive deeper into doctest, check out the Python's doctest: Document and Test Your Code at Once tutorial.

The unittest package is also a testing framework. However, it provides a more complete solution than doctest. In the following sections, you'll learn and work with unittest to create suitable unit tests for your Python code.

Getting to Know Python's unittest

The unittest package provides a unit test framework inspired by JUnit, which is a unit test framework for the Java language. The unittest framework is directly available in the standard library, so you don't have to install anything to use this tool.

The framework uses an object-oriented approach and supports some essential concepts that facilitate test creation, organization, preparation, and automation:

  • Test case: An individual unit of testing. It examines the output for a given input set.
  • Test suite: A collection of test cases, test suites, or both. They're grouped and executed as a whole.
  • Test fixture: A group of actions required to set up an environment for testing. It also includes the teardown processes after the tests run.
  • Test runner: A component that handles the execution of tests and communicates the results to the user.

In the following sections, you'll dive into using the unittest package to create test cases, suites of tests, fixtures, and, of course, run your tests.

Organizing Your Tests With the TestCase Class

The unittest package defines the TestCase class, which is primarily designed for writing unit tests. To start writing your test cases, you just need to import the class and subclass it. Then, you'll add methods whose names should begin with test. These methods will test a given unit of code using different inputs and check for the expected results.

Here's a quick test case that tests the built-in abs() function:
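The full example continues on the site, but a minimal sketch of what such a test case could look like (an illustrative version, not necessarily the article's own code) is:

import unittest

class TestAbsFunction(unittest.TestCase):
    def test_positive_number(self):
        self.assertEqual(abs(10), 10)

    def test_negative_number(self):
        self.assertEqual(abs(-10), 10)

    def test_zero(self):
        self.assertEqual(abs(0), 0)

if __name__ == "__main__":
    unittest.main()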

Read the full article at https://realpython.com/python-unittest/ »


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

29 Apr 2024 2:00pm GMT

Nikola: Nikola v8.3.1 is out!

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.3.1. This release fixes some small bugs, including some introduced by the new Nikola Plugin Manager.

The minimum Python version supported is now 3.8, and we have adopted a formal policy to define the Python versions supported by Nikola.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown - and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola.

Changes

Features

Bugfixes

Other

29 Apr 2024 12:11pm GMT

Zato Blog: API Testing in Pure English

API Testing in Pure English

How to test APIs in pure English

Do you have 20 minutes to learn how to test APIs in pure English, without any programming needed?

Great, the API testing tutorial is here.

Right after you complete it, you'll be able to write API tests like the one below.

Next steps:

29 Apr 2024 8:00am GMT