28 Aug 2025
Planet Python
Zero to Mastery: [August 2025] Python Monthly Newsletter 🐍
69th issue of Andrei Neagoie's must-read monthly Python Newsletter: Python Performance Myths, Do You Need Classes, Good System Design, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.
28 Aug 2025 10:42am GMT
Reuven Lerner: You’re probably using uv wrong
This is adapted from my "Better developers" newsletter: https://BetterDevelopersWeekly.com.
Like many others in the Python world, I've adopted "uv", the do-everything, lightning-fast package manager written in Rust.
uv does it all: For people who just want to download and install packages, it replaces pip. For people who want to keep multiple versions of Python on their computer, it replaces pyenv. For people who want to work on multiple projects at the same time using virtual environments, it handles that, too. And for people who want to develop and distribute Python software, it works for them, also.
Here's the thing, though: If you're using uv as a replacement for one of these tools or problems, then you're probably using it wrong. Yes, uv is a superset of these tools. But the idea is to sweep many of these things under the rug, thanks to the idea of a uv "project." In many ways, a project in uv allows us to ignore virtual environments, ignore Python versions, and even ignore pip.
I know this, because I've used uv the wrong way for quite a while. It was so much faster than pip that I started to say
uv pip install PACKAGE
instead of
pip install PACKAGE
But actually, that's not quite true - I don't really use virtual environments very much, so I would just install packages on my global Python installation:
uv pip install --system PACKAGE
Which works! However, this isn't really the way that things are supposed to be done.
So, how are we supposed to do things?
uv assumes that everything you do will be in a "project." Now, uv isn't unique in this approach; PEP 518 (https://peps.python.org/pep-0518/) from way back in 2016 talked about projects, and specified a file called pyproject.toml that describes a minimal project. The file's specifications have evolved over time, and the official specification is currently at https://packaging.python.org/en/latest/specifications/pyproject-toml/.
For many years, Python programs were just individual files. A bunch of files could be put together into a single folder and treated as a package. The term "project" was used informally at companies and working groups, or among people who wrote Python editors, such as PyCharm and VSCode.
Even without a formal definition, we all kind of know what a project is - a combination of Python and other files, all grouped together into one whole, to solve one set of problems.
A pyproject.toml file is meant to be the ultimate authority regarding a project. The TOML format is similar to an INI configuration file, but with Python-like data types such as strings and integers. Dependency entries also support version numbers and comparison operators, allowing us to indicate exact, approximate, "not less than," and "not more than" versions for dependencies.
The minimal, initial pyproject.toml file looks like this:
[project]
name = "myproj"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = []
As you can see, it only defines a "project" section, and then a number of name-value pairs. You can see that I called this project "myproj", and that I created it using Python 3.13 - hence the "requires-python" line. It doesn't do anything just yet, which is why it has an empty list of dependencies.
How and when do you create such a project? How does it intersect with Python's virtual environments? How does it intersect with a Python version manager, such as pyenv, about which I've written previously?
Here's the secret: To use uv correctly, you ignore pyenv. You ignore pip. You ignore venv. You just create and work with a Python project. If you do that from within uv, then you'll basically be letting uv do all of the hard work for you, papering over all of the issues that Python packaging has accumulated over the years.
The thing is, uv does offer a variety of subcommands that let you work with virtual environments, Python versions, and package installation. So it's easy to get lulled into using these parts of uv to replace one or more of them. But if you do that, you're missing the point, and the overall design goals, of uv.
So, how should you be using it?
First: You create a project with "uv init". It is possible to retroactively use uv on an existing directory of code, but let's assume that you want to start a brand-new project. You say
uv init myproj
This creates a subdirectory, "myproj", under the current directory. This directory contains:
- .git, the directory containing Git repo information. So yes, uv assumes that you'll manage your project with Git, and already initializes a new repo.
- .gitignore, with reasonable defaults for anyone coding in Python. It'll ignore __pycache__ directories, pyo and pyc compiled files, the build and dist subdirectories, and a variety of other file types that we don't need to store in Git.
- .python-version, a file that tells uv (and pyenv, if you're using it) what version of Python to use
- main.py, a skeleton file that you can modify (or rename) to use as the base for your application
- pyproject.toml, the configuration file I described earlier
- README.md, a stub README file that you can edit to describe your project
Once the project is created, you can write code to your heart's content, adding files and directories as you see fit. You can commit to the local Git repo, or you can add a remote repo and push to it.
So far, uv doesn't seem to be doing much for us.
But let's say that we want to modify main.py to download the latest edition of python.org and display the number of bytes contained on that page. We can say:
import requests

def main():
    print("Hello from myproj!")
    r = requests.get('https://python.org')
    print(f'Content at python.org contains {len(r.content)} bytes.')

if __name__ == "__main__":
    main()
If you run it with "python main.py", you'll find that it works just fine, printing a greeting and the number of bytes at python.org.
But you shouldn't be doing that! Using "python main.py" means that you're running whatever version of Python is in your PATH. That might well be different from what uv is using. And (as we'll see in a bit) it likely has access to a different set of packages than uv's installation might have.
Rather, you should be running the program with "uv run python main.py". Running your program via "uv" means that it'll take your pyproject.toml configuration file into account.
Why would we care? Because pyproject.toml is shared among all of the people working on your project. It ensures that they're in sync regarding not only the version of Python you're using, but also the libraries and tools you're using, too. (We'll get to packages in just a moment.) If I make sure to configure everything correctly in "pyproject.toml", then everyone on my team will have an identical environment when they run my code. It also means that if we install our code on a production system, it'll use things correctly there, too.
So, what happens when I run it?
❯ uv run python main.py
Using CPython 3.13.5 interpreter at: /Users/reuven/.pyenv/versions/3.13.5/bin/python3.13
Creating virtual environment at: .venv
Traceback (most recent call last):
  File "/Users/reuven/Desktop/myproj/main.py", line 1, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
As we can see, "requests" is not installed. But wait - we just saw that it's installed on my system. Indeed, we got a response back from the program!
This is where anyone familiar with virtual environments will start to nod their head, saying, "Good! uv is ensuring that only packages installed in the virtual environment for this project will be available."
And indeed, you can see that uv noticed a lack of a venv, and created one in a hidden subdirectory, ".venv". So "uv run" doesn't just run our program, it does so within the context of a virtual environment.
If you're expecting us to start using "activate" and "pip install" within a venv, you'll be sadly mistaken. That's because uv wants to shield us from such things. Instead, we'll add one or more packages to pyproject.toml using "uv add":
❯ uv add requests
Here's what I get:
Resolved 6 packages in 42ms
Installed 5 packages in 13ms
+ certifi==2025.8.3
+ charset-normalizer==3.4.3
+ idna==3.10
+ requests==2.32.4
+ urllib3==2.5.0
These packages were installed in .venv/lib/python3.13/site-packages, which is what we would expect in a virtual environment. But you can mostly ignore the .venv directory. That's because the most important file is pyproject.toml, which we see has been changed via "uv add":
[project]
name = "myproj"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "requests>=2.32.4",
]
We now have one dependency: requests, with a version of at least 2.32.4.
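For reference, the dependency strings in this list use standard version specifiers (a sketch; every pin below is illustrative, not taken from the project above):

```toml
dependencies = [
    "requests>=2.32.4",      # at least this version
    "urllib3==2.5.0",        # exactly this version
    "idna~=3.10",            # compatible release: >=3.10, <4.0
    "certifi>=2025.1,<2026", # bounded range
]
```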
In a traditional venv-based project, we would activate the venv, pip install, run, and then deactivate the venv. In the uv world, we use uv to add to our pyproject.toml and to run our program with "uv run". In both cases, the venv is used, but that usage is mostly hidden from view.
But wait a second: It's nice that we indicated what version of requests we need. But what about the packages that requests requires? Moreover, what if our program also requires NumPy, which has components that are compiled from C? How can we be sure that everyone who downloads this project and uses "uv run" is going to use precisely the same versions of the same packages?
The answer is another configuration file, called "uv.lock". This file is written and maintained by uv, and shouldn't ever be touched by us. It should, however, be committed to Git and distributed to everyone running the project. When you use "uv run", uv checks "uv.lock" to ensure that all of the needed packages are installed, and that they are all compatible with one another. If needed, it'll download and install versions that are missing, too. And "uv.lock" includes the precise filenames that are needed for each package, for each supported architecture and version of Python - for the packages that we explicitly list as dependencies, and those on which the dependencies themselves rely.
If you adopt uv in the way it's meant to be used, you thus end up with a workflow that's less complex than what many Python developers have used before. When you need a package, you "uv add" it. When you want to run your program, you "uv run" it. And so long as you make sure to check "uv.lock" into Git, then anyone else downloading, installing, and running your program via "uv run" will be sure that all libraries are installed and compatible with one another.
The post You're probably using uv wrong appeared first on Reuven Lerner.
28 Aug 2025 8:56am GMT
27 Aug 2025
Planet Python
Real Python: Python 3.14 Preview: Lazy Annotations
Recent Python releases have introduced several small improvements to the type hinting system, but Python 3.14 brings a single major change: lazy annotations. This change delays annotation evaluation until explicitly requested, improving performance and resolving issues with forward references. Library maintainers might need to adapt, but for regular Python users, this change promises a simpler and faster development experience.
By the end of this tutorial, you'll understand that:
- Although annotations are used primarily for type hinting in Python, they support both static type checking and runtime metadata processing.
- Lazy annotations in Python 3.14 defer evaluation until needed, enhancing performance and reducing startup time.
- Lazy annotations address issues with forward references, allowing types to be defined later.
- You can access annotations via the .__annotations__ attribute, or use annotationlib.get_annotations() and typing.get_type_hints() for more robust introspection.
- typing.Annotated enables combining type hints with metadata, facilitating both static type checking and runtime processing.
Explore how lazy annotations in Python 3.14 streamline your development process, offering both performance benefits and enhanced code clarity. If you're just looking for a brief overview of the key changes in 3.14, then expand the collapsible section below:
Python 3.14 introduces lazy evaluation of annotations, solving long-standing pain points with type hints. Here's what you need to know:
- Annotations are no longer evaluated at definition time. Instead, their processing is deferred until you explicitly access them.
- Forward references work out of the box without needing string literals or from __future__ import annotations.
- Circular imports are no longer an issue for type hints because annotations don't trigger immediate name resolution.
- Startup performance improves, especially for modules with expensive annotation expressions.
- Standard tools, such as typing.get_type_hints() and inspect.get_annotations(), still work but now benefit from the new evaluation strategy.
- inspect.get_annotations() becomes deprecated in favor of the enhanced annotationlib.get_annotations().
- You can now request annotations at runtime in alternative formats, including strings, values, and proxy objects that safely handle forward references.
These changes make type hinting faster, safer, and easier to use, mostly without breaking backward compatibility.
Get Your Code: Click here to download the free sample code that shows you how to use lazy annotations in Python 3.14.
Take the Quiz: Test your knowledge with our interactive "Python Annotations" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python Annotations
Test your knowledge of annotations and type hints, including how different Python versions evaluate them at runtime.
Python Annotations in a Nutshell
Before diving into what's changed in Python 3.14 regarding annotations, it's a good idea to review some of the terminology surrounding annotations. In the next sections, you'll learn the difference between annotations and type hints, and review some of their most common use cases. If you're already familiar with these concepts, then skip straight to lazy evaluation of annotations for details on how the new annotation processing works.
Annotations vs Type Hints
Arguably, type hints are the most common use case for annotations in Python today. However, annotations are a more general-purpose feature with broader applications. They're a form of syntactic metadata that you can optionally attach to your Python functions and variables.
Although annotations can convey arbitrary information, they must follow the language's syntax rules. In other words, you won't be able to define an annotation representing a piece of syntactically incorrect Python code.
To be even more precise, annotations must be valid Python expressions, such as string literals, arithmetic operations, or even function calls. On the other hand, annotations can't be simple or compound statements that aren't expressions, like assignments or conditionals, because those might have unintended side effects.
Note: For a deeper explanation of the difference between these two constructs, check out Expression vs Statement in Python: What's the Difference?
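For instance, any valid expression can serve as an annotation; the unit strings and the arithmetic below are purely illustrative:

```python
# Annotations may be arbitrary expressions: string literals,
# arithmetic, even function calls, not just type names.
def speed(distance: "meters", duration: "seconds") -> 1000 / 3600:
    return distance / duration

# Each evaluated expression is stored under the corresponding name:
print(speed.__annotations__)
```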
Python supports two flavors of annotations, as specified in PEP 3107 and PEP 526:
- Function annotations: Metadata attached to signatures of callable objects, including functions and methods-but not lambda functions, which don't support the annotation syntax.
- Variable annotations: Metadata attached to local, nonlocal, and global variables, as well as class and instance attributes.
The syntax for function and variable annotations looks almost identical, except that functions support additional notation for specifying their return value. Below is the official syntax for both types of annotations in Python. Note that <annotation> is a placeholder, and you don't need the angle brackets when replacing this placeholder with the actual annotation:
Python 3.6+
class Class:
    # These two could be either class or instance attributes:
    attribute1: <annotation>
    attribute2: <annotation> = value

    def method(
        self,
        parameter1,
        parameter2: <annotation>,
        parameter3: <annotation> = default_value,
        parameter4=default_value,
    ) -> <annotation>:
        self.instance_attribute1: <annotation>
        self.instance_attribute2: <annotation> = value
        ...

def function(
    parameter1,
    parameter2: <annotation>,
    parameter3: <annotation> = default_value,
    parameter4=default_value,
) -> <annotation>:
    ...

variable1: <annotation>
variable2: <annotation> = value
To annotate a variable, attribute, or function parameter, put a colon (:) just after its name, followed by the annotation itself. Conversely, to annotate a function's return value, place the right arrow (->) symbol after the closing parenthesis of the parameter list. The return annotation goes between that arrow and the colon denoting the start of the function's body.
Note: The right arrow symbol isn't unique to Python. A few other programming languages use it as well but for different purposes. For example, Java and CoffeeScript use it to define anonymous functions, similar to Python's lambdas. This symbol is sometimes referred to as the thin arrow (->) to distinguish it from the fat arrow (=>) found in JavaScript and Scala.
As shown, you can mix and match function and method parameters, including optional parameters, with or without annotations. You can also annotate a variable without assigning it a value, effectively making a declaration of an identifier that might be defined later.
Declaring a variable doesn't allocate memory for its storage or even register it in the current namespace. Still, it can be useful for communicating the expected type to other people reading your code or a static type checker. Another common use case is instructing the Python interpreter to generate boilerplate code on your behalf, such as when working with data classes. You'll explore these scenarios in the next section.
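A short sketch of this behavior, using a hypothetical Config class: the annotated-only name is recorded as metadata but never becomes a real attribute.

```python
class Config:
    retries: int            # declared only: no attribute is created
    timeout: float = 30.0   # declared and assigned

print(hasattr(Config, "retries"))  # False: declaration allocates nothing
print(Config.__annotations__)      # both names appear in the metadata
```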
To give you a better idea of what Python annotations might look like in practice, below are concrete examples of syntactically correct variable annotations:
Python 3.6+
>>> temperature: float
>>> pressure: {"unit": "kPa", "min": 220, "max": 270}
You annotate the variable temperature with float to indicate its expected type. For the variable pressure, you use a Python dictionary to specify the air pressure unit along with its minimum and maximum values. This kind of metadata could be used to validate the actual value at runtime, generate documentation based on the source code, or even automatically build a command-line interface for a Python script.
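As a sketch of that runtime-validation idea, here's a hypothetical Sensor class whose validate() helper reads the metadata dictionary back out of the class annotations (both names are made up for illustration):

```python
class Sensor:
    pressure: {"unit": "kPa", "min": 220, "max": 270}

    @classmethod
    def validate(cls, name, value):
        # Read the metadata dictionary recorded as the annotation.
        spec = cls.__annotations__[name]
        if not spec["min"] <= value <= spec["max"]:
            raise ValueError(f"{name} must be {spec['min']}-{spec['max']} {spec['unit']}")
        return value

print(Sensor.validate("pressure", 250))  # 250: within the allowed range
```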
Read the full article at https://realpython.com/python-annotations/ »
[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
27 Aug 2025 2:00pm GMT
The Python Show: 54 - Neural Networks and Data Visualization with Nicolas Rougier
In this episode, we have the honor of having Nicolas Rougier on our show. Nicolas is a researcher and team leader at the Institute of Neurodegenerative Diseases (Bordeaux, France).
We discuss how Nicolas utilizes computational models and neural networks in his research on the brain. We also talk about Nicolas's history with Python, his work on Glumpy and VisPy, and much, much more!
Links
- Scientific Visualization: Python & Matplotlib, an open access book on scientific visualization.
- From Python to Numpy, an open access book on numerical computing.
- 100 Numpy Exercises, a collection of 100 NumPy exercises, from easy to hard.
27 Aug 2025 1:24pm GMT
Real Python: Quiz: Python Annotations
In this quiz, you'll test your understanding of lazy annotations introduced in Python 3.14.
By working through this quiz, you'll revisit how they improve performance, address forward reference issues, and support both static type checking and runtime processing.
27 Aug 2025 12:00pm GMT
Seth Michael Larson: The vulnerability might be in the proof-of-concept
The Security Developer-in-Residence role at the Python Software Foundation is funded by Alpha-Omega. Thanks to Alpha-Omega for sponsoring security in the Python ecosystem.
I'm on the security team for multiple open source projects with ~medium levels of report volume. Over the years, you see patterns in how reporters try to have a report accepted as a vulnerability in the project.
One pattern that I see frequently is submitting proof-of-concept code that itself contains the vulnerability. However, the project code is also used, so the reporters try to convince you that the vulnerability is in the project code.
Here's a simplified version of reports that the Python Security Response Team sees fairly frequently:
user_controlled_value = "..."
# ...(some layers of indirection)
eval(user_controlled_value) # RCE!!!
This isn't a vulnerability in Python, clearly. Python is designed to execute code, so if you tell Python to execute code it will do so. But it can be less obvious when there's a more subtle vulnerability in the proof-of-concept. The below example filters user-controlled URLs and returns an HTTP response for acceptable URLs:
import urllib3
from urllib.parse import urlparse

def safe_url_opener(url):
    input_url = urlparse(url)
    input_scheme = input_url.scheme
    input_host = input_url.hostname

    block_schemes = ["file", "ftp"]
    block_hosts = ["evil.com"]

    if input_scheme in block_schemes:
        return None
    if input_host in block_hosts:
        return None

    return urllib3.request("GET", url)
The reporter claimed that there was a vulnerability in urlparse because the parser behaved differently than urllib3.request, and thus an attacker would be able to circumvent the block list with a URL crafted to exploit these differences ("SSRF").
Keep in mind that urlparse and urllib3 both implement RFC 3986, but for backwards compatibility urllib3 accepts "scheme-less" URLs in the form "localhost:8080/" and handles them as "http://localhost:8080/".
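You can observe the mismatch directly with the standard library; this small sketch shows how urlparse sees a scheme-less URL (output shown is for current CPython):

```python
from urllib.parse import urlparse

# urlparse reads "localhost" as the scheme, so there is no hostname
# for a block list to match, while urllib3 would request that host.
parts = urlparse("localhost:8080/")
print(parts.scheme)    # 'localhost'
print(parts.hostname)  # None
```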
I didn't agree with this reporter's determination. Instead, I asserted that the safe_url_opener() function contains the vulnerability. To prove this, I implemented a safe_url_opener() function that uses urlparse with urllib3 securely:
import urllib3
from urllib.parse import urlparse

def safe_url_opener(unsafe_url):
    safe_url = urlparse(unsafe_url)

    # Use an allow-list, not a block-list.
    allow_schemes = ["https"]
    allow_hosts = ["good.com"]
    if safe_url.scheme not in allow_schemes:
        return
    if safe_url.hostname not in allow_hosts:
        return

    # Check the URL doesn't have components we don't expect.
    if safe_url.username is not None or safe_url.port is not None:
        return

    # Use the safe parsed values, not the unsafe URL.
    pool = urllib3.HTTPSConnectionPool(
        host=safe_url.hostname,
        assert_hostname=safe_url.hostname,
    )
    target = safe_url.path or "/"
    if safe_url.query:
        target += f"?{safe_url.query}"
    return pool.request("GET", target)
The above program could be even more secure and use urllib3's urllib3.util.parse_url() function to completely remove SSRF potential.
This post is meant as a reminder to security teams and maintainers of open source projects that sometimes the vulnerability is in the proof-of-concept and not your own project's code. Having a security policy (e.g. "urlparse strictly implements RFC 3986 regardless of other implementation behaviors") and threat model (e.g. "users must not combine with other URL parsers") documented for public APIs means security reports can be treated consistently while minimizing stress and reducing repeated research into historical decisions around API design.
Thanks for keeping RSS alive! ♥
27 Aug 2025 12:00am GMT
Quansight Labs Blog: Expressions are coming to pandas!
`pd.col` will soon be a real thing!
27 Aug 2025 12:00am GMT
26 Aug 2025
Planet Python
PyCoder’s Weekly: Issue #696: Namespaces, with, functools.Placeholder, and More (Aug. 26, 2025)
#696 - AUGUST 26, 2025
View in Browser »
Python Namespace Packages Are a Pain
Namespace packages are a way of splitting a Python package across multiple directories. Namespaces can be implicit or explicit and this can cause confusion. This article explains why and recommends what to do.
JOSH CANNON
Python's with Statement: Manage External Resources Safely
Understand Python's with statement and context managers to streamline the setup and teardown phases in resource management. Start writing safer code today!
REAL PYTHON
functools.Placeholder
Learn how to use functools.Placeholder, new in Python 3.14, with a real-life example.
RODRIGO GIRÃO SERRÃO
Articles & Tutorials
Agentic AI Programming With Python
Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode of Talk Python To Me, Matthew Makai is interviewed; together they cut through the hype and incentives to define "agentic," and get hands-on with how it can work for you.
KENNEDY & MAKAI podcast
pytest for Data Scientists
This guide shows how to use pytest to write lightweight yet powerful tests for functions, NumPy arrays, and pandas DataFrames. You'll also learn about parametrization, fixtures, and mocking to make your workflows more reliable and production-ready.
CODECUT.AI • Shared by Khuyen Tran
SciPy, NumPy, and Fostering Scientific Python
What went into developing the open-source Python tools data scientists use every day? This week on the show, we talk with Travis Oliphant about his work on SciPy, NumPy, Numba, and many other contributions to the Python scientific community.
REAL PYTHON podcast
The State of Python 2025
Explore the key trends and actionable ideas from the latest Python Developers Survey, which was conducted jointly by the Python Software Foundation and JetBrains PyCharm and includes insights from over 30,000 developers. Discover the key takeaways in this blog post.
JETBRAINS.COM • Shared by Evgeniia Verbina from JetBrains PyCharm
Preventing Domain Resurrection Attacks
"PyPI now checks for expired domains to prevent domain resurrection attacks, a type of supply-chain attack where someone buys an expired domain and uses it to take over PyPI accounts through password resets."
MIKE FIEDLER
How to Use Redis With Python
"Redis is an open-source, in-memory data structure store that can be used as a database, cache, message broker, or queue." Learn how to use it with Python in this step-by-step tutorial.
APPSIGNAL.COM • Shared by AppSignal
Custom Parametrization Scheme With pytest
Custom parametrisation schemes are not advertised a lot within the pytest community. Learn how they can improve readability and debugging of your tests.
CHRISTOS LIONTOS • Shared by Christos Liontos
Hypothesis Is Now Thread-Safe
Hypothesis is a property-based testing library for Python. In order to move towards comparability with free-threading, the library is now thread safe.
LIAM DEVOE
Single and Double Underscores in Python Names
Learn Python naming conventions with single and double underscores to design APIs, create safe classes, and prevent name clashes.
REAL PYTHON
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
August 27, 2025
REALPYTHON.COM
PyCon Poland 2025
August 28 to September 1, 2025
PYCON.ORG
PyCon Kenya 2025
August 28 to August 31, 2025
PYCON.KE
PyCon Greece 2025
August 29 to August 31, 2025
PYCON.GR
🐍 ¡Cuarta Reunión De Pythonistas GDL!
August 30, 2025
PYTHONISTAS-GDL.ORG
PyData Berlin 2025
September 1 to September 4, 2025
PYDATA.ORG
Limbe
September 1 to September 2, 2025
NOKIDBEHIND.ORG
Django Summit DELSU
September 1 to September 6, 2025
HAMPLUSTECH.COM
PyCon Taiwan
September 6 to September 8, 2025
PYCON.ORG
Happy Pythoning!
This was PyCoder's Weekly Issue #696.
View in Browser »
[ Subscribe to 🐍 PyCoder's Weekly 💌 - Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
26 Aug 2025 7:30pm GMT
Mike Driscoll: Python Books and Courses – Back to School Sale
If you are heading back to school and need to learn Python, consider checking out my sale. You can get 25% off any of my eBooks or courses using the following coupon at checkout: FALL25
My books and course cover the following topics:
- Beginner Python (Python 101)
- Intermediate Python
- Creating GUIs with wxPython
- Working with Excel
- Image Processing
- Creating PDFs with Python
- Working with JupyterLab
- Creating TUIs with Python and Textual
- Python Logging
Start learning Python or widen your Python knowledge today!
The post Python Books and Courses - Back to School Sale appeared first on Mouse Vs Python.
26 Aug 2025 4:32pm GMT
Real Python: Profiling Performance in Python
Do you want to optimize the performance of your Python program to make it run faster or consume less memory? Before diving into any performance tuning, you should strongly consider using a technique called software profiling. It can help you decide whether optimizing the code is necessary and, if so, which parts of the code you should focus on.
Sometimes, the return on investment in performance optimizations just isn't worth the effort. If you only run your code once or twice, or if it takes longer to improve the code than execute it, then what's the point?
When it comes to improving the quality of your code, you'll probably optimize for performance as a final step, if you do it at all. Often, your code will become speedier and more memory efficient thanks to other changes that you make. When in doubt, go through this short checklist to figure out whether to work on performance:
- Testing: Have you tested your code to prove that it works as expected and without errors?
- Refactoring: Does your code need some cleanup to become more maintainable and Pythonic?
- Profiling: Have you identified the most inefficient parts of your code?
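As a minimal sketch of that third step, the standard library's cProfile module can show where the time goes before you change anything (slow_sum is an illustrative stand-in for real application code):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # A deliberately naive loop, standing in for real application code.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Report the most expensive calls, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue())
```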
26 Aug 2025 2:00pm GMT
Seth Michael Larson: SMS URLs
Did you know there is a URL scheme for sending an "SMS" or text message, similar to mailto:? SMS URLs are defined in RFC 5724 and are formatted like so:
sms:<recipient(s)>?body=<body>
Here's a bunch of test links with different scenarios you can try on your mobile phone:
Annoyingly, it appears that as of today Apple doesn't implement RFC 5724 correctly for multiple recipients. The first URL won't work on iPhones, but will work on Android. Only the second URL will work on iPhones (and there's not much public explanation as to why that might be).
sms:+15551230001,+15551230002,...?body=Hello%20world!
sms://open?addresses=+15551230001,+15551230002,...&body=Hello%20world!
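Building such a URL in Python is mostly a matter of percent-encoding the body; here's a hedged sketch (the sms_url helper and the phone numbers are made up for illustration):

```python
from urllib.parse import quote

def sms_url(recipients, body):
    # RFC 5724 shape: comma-separated recipients, body as a query parameter.
    return f"sms:{','.join(recipients)}?body={quote(body)}"

print(sms_url(["+15551230001", "+15551230002"], "Hello world!"))
```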
Thanks for keeping RSS alive! ♥
26 Aug 2025 12:00am GMT
25 Aug 2025
Planet Python
The Lunar Cowboy: Introducing unittest-fixtures
I would like to introduce unittest-fixtures. The unittest-fixtures package is a helper for the unittest.TestCase class that allows one to define fixtures as simple functions and declare them in your TestCase using decorators.
The unittest-fixtures package is available now from PyPI.
The following is extracted from the project's README:
Description
unittest-fixtures spun off from my Gentoo Build Publisher project. I use unittest, the test framework in the Python standard library, where it's customary to define a TestCase's fixtures in the .setUp() method. Having done it this way for years, it occurred to me one day that this goes against the open-closed principle (OCP). What if instead of cracking open the .setUp() method to add a fixture to a TestCase one could instead add a decorator? That's what unittest-fixtures allows one to do.
```python
from unittest_fixtures import given

@given(dog)
class MyTest(TestCase):
    def test_method(self, fixtures):
        dog = fixtures.dog
```
In the above example, dog is a fixture function. Fixture functions are passed to the given decorator. When the test method is run, the fixtures are "instantiated" and attached to the fixtures keyword argument of the test method.
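To make that mechanism concrete, here is a minimal, stdlib-only sketch of how a given-style class decorator could inject fixtures into test methods. This is an illustration of the idea only, not the actual unittest-fixtures implementation:

```python
import functools
import unittest
from types import SimpleNamespace

def given(*fixture_funcs):
    """Illustrative given-style class decorator (not the real library)."""
    def decorate(cls):
        for name, method in list(vars(cls).items()):
            if name.startswith("test") and callable(method):
                @functools.wraps(method)
                def wrapper(self, *args, _method=method, **kwargs):
                    # "Instantiate" each fixture, attaching it by function name.
                    ns = SimpleNamespace()
                    for fn in fixture_funcs:
                        setattr(ns, fn.__name__, fn(ns))
                    return _method(self, *args, fixtures=ns, **kwargs)
                setattr(cls, name, wrapper)
        return cls
    return decorate

def dog(fixtures):
    # A plain fixture function returning the fixture value.
    return {"name": "Fido"}

@given(dog)
class MyTest(unittest.TestCase):
    def test_method(self, fixtures):
        self.assertEqual("Fido", fixtures.dog["name"])
```

Running `MyTest("test_method").run()` passes: the wrapper builds the fixtures namespace before delegating to the original test method.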
Fixture functions
Fixture functions are functions that one defines that return a "fixture". For example, the above dog fixture might look like this:
```python
from unittest_fixtures import fixture

@fixture()
def dog(fixtures):
    return Dog(name="Fido")
```
Fixture functions are always passed a Fixtures argument, because fixtures can depend on other fixtures. For example:
```python
@fixture(dog)
def person(fixtures):
    p = Person(name="Jane")
    p.pet = fixtures.dog
    return p
```
Fixture functions can have keyword parameters, but those parameters must have defaults.
```python
@fixture
def dog(fixtures, name="Fido"):
    return Dog(name=name)
```
Then one's TestCase can use the where decorator to pass the parameter:
```python
from unittest_fixtures import given, where

@given(dog)
@where(dog__name="Buddy")
class MyTest(TestCase):
    def test_method(self, fixtures):
        dog = fixtures.dog
        self.assertEqual("Buddy", dog.name)
```
Duplicating fixtures
The unittest-fixtures library allows one to use a fixture more than once. This is done by passing the fixture as a keyword argument, giving different names to the same fixture function. Different parameters can then be passed to each:
```python
@given(fido=dog, buddy=dog)
@where(fido__name="Fido", buddy__name="Buddy")
class MyTest(TestCase):
    def test_method(self, fixtures):
        self.assertEqual("Buddy", fixtures.buddy.name)
        self.assertEqual("Fido", fixtures.fido.name)
```
Fixture-depending fixtures will all use the same fixture, but only if they have the same name. So in the above example, if we also gave the TestCase the person fixture, that person would have a different dog, because person depends on a fixture called "dog". However, this will work:
```python
@given(dog, person)
class MyTest(TestCase):
    def test_method(self, fixtures):
        dog = fixtures.dog
        person = fixtures.person
        self.assertIs(person.pet, dog)
```
@where (fixture parameters)
The where decorator can be used to pass parameters to a fixture function. Fixture functions are not required to take extra parameters. To pass a parameter to a fixture, for example to pass name to the dog fixture, use the name of the fixture function, followed by __, followed by the parameter name: dog__name. Fixture functions can also have a parameter with the same name as the fixture itself. For example:
```python
@given(settings)
@where(settings={"DEBUG": True, "SECRET": "sauce"})
class MyTest(TestCase):
    ...
```
There are times when one may desire to pass a fixture parameter that uses the value of another fixture; however, that value does not get calculated until each test is run. The Param type allows one to accomplish this:
```python
from unittest_fixtures import Param, given, where

@given(person)
@where(person__name=Param(lambda fixtures: fixtures.name))
@given(name=random_choice)
@where(name__choices=["Liam", "Noah", "Jack", "Oliver"])
class MyTest(TestCase):
    ...
```
[!NOTE] In the above example, fixture ordering is important. Given that person implicitly depends on name, the name fixture needs to be set up first. We do this by declaring it before the person fixture (lower vertically in the stack of decorators).
Fixtures as context managers
Sometimes a fixture will need a setup and teardown process. If unittest-fixtures is supposed to remove the need to open setUp(), then it must also remove the need to open tearDown(). It does this by letting one define the fixture as a generator function. For example:
```python
import tempfile

@fixture()
def tmpdir(fixtures):
    with tempfile.TemporaryDirectory() as tempdir:
        yield tempdir
```
Using the unittest.mock library is another good example of using context manager fixtures.
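To see how a generator fixture maps onto setup and teardown, here is a stdlib-only sketch that drives one by hand with contextlib. It illustrates the idea only; the library's actual machinery may differ:

```python
import contextlib
import os
import tempfile

# A generator fixture: code before `yield` is setup; unwinding the
# `with` block afterwards is teardown.
def tmpdir(fixtures=None):
    with tempfile.TemporaryDirectory() as tempdir:
        yield tempdir

# Drive it the way a test runner could:
with contextlib.ExitStack() as stack:
    path = stack.enter_context(contextlib.contextmanager(tmpdir)())
    assert os.path.isdir(path)   # available during the test
assert not os.path.exists(path)  # removed once the stack unwinds
```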
fixture-depending fixtures
As stated above, fixtures can depend on other fixtures. This is done by "declaring" the dependencies in the fixture decorator. Fixtures are then passed as an argument to the fixture function:
```python
@fixture(settings, tmpdir)
def jenkins(fixtures, root=None):
    root = root or fixtures.tmpdir
    settings = replace(fixtures.settings, STORAGE_PATH=root)
    return Jenkins.from_settings(settings)
```
The above example shows that one can get pretty fancy... or creative with one's fixture definitions.
Fixtures can also have named dependencies. So in the above example, if one wanted a different tmpdir than the "global" one:

```python
@fixture(settings, jenkins_root=tmpdir)
def jenkins(fixtures, root=None):
    root = root or fixtures.jenkins_root
    settings = replace(fixtures.settings, STORAGE_PATH=root)
    return Jenkins.from_settings(settings)
```

If a TestCase used both jenkins and tmpdir:
```python
@given(tmpdir, jenkins)
class MyTest(TestCase):
    def test_something(self, fixtures):
        self.assertNotEqual(fixtures.jenkins.root, fixtures.tmpdir)
```
Again, if the two fixtures have different names, then they are two separate fixtures. In general, one should not use named fixtures unless one wants multiple fixtures of the same type.
@params (parametrized tests)
Not to be confused with @parametrized (below), which works similarly. The params decorator turns a TestCase's methods into parametrized tests; however, unlike @parametrized, the parameters are passed into the fixtures argument instead of as additional arguments to the test method. For example:
```python
from unittest_fixtures import params

@params(number=[1, 2, 3], square=[1, 4, 9])
class MyTest(TestCase):
    def test(self, fixtures):
        self.assertEqual(fixtures.number**2, fixtures.square)
```
In the above example, the test method is called three times. With each iteration, the fixtures parameter has the values:

- Fixtures(number=1, square=1)
- Fixtures(number=2, square=4)
- Fixtures(number=3, square=9)
@parametrized
The @parametrized decorator acts as a wrapper for unittest's subtests. Unlike @params above, this decorator is applied to individual TestCase methods rather than to the class itself, and the extra parameters are passed directly to the test method. This can be used if you only want to parametrize a specific test method in a TestCase rather than all test methods.
For example:
```python
from unittest_fixtures import parametrized

class ParametrizeTests(TestCase):
    @parametrized([[1, 1], [2, 4], [3, 9]])
    def test(self, number, square):
        self.assertEqual(number**2, square)
```
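Since @parametrized is described as a wrapper around unittest's subtests, its core idea can be sketched with the standard library alone. This is an illustration, not the package's implementation:

```python
import functools
import unittest

def parametrized(param_sets):
    """Sketch: run the decorated test once per parameter set, each as a subtest."""
    def decorate(method):
        @functools.wraps(method)
        def wrapper(self):
            for params in param_sets:
                # subTest reports each parameter set's failure independently.
                with self.subTest(params=params):
                    method(self, *params)
        return wrapper
    return decorate

class ParametrizeTests(unittest.TestCase):
    @parametrized([[1, 1], [2, 4], [3, 9]])
    def test(self, number, square):
        self.assertEqual(number**2, square)
```

The upside of subtests is that a failure in one parameter set doesn't stop the remaining sets from running.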
The fixtures kwarg may be overridden
The fixtures keyword argument is automatically passed to TestCase methods when the test is run. The name of the keyword argument can be overridden as follows:

```python
@given(dog)
class MyTest(TestCase):
    unittest_fixtures_kwarg = "fx"

    def test_method(self, fx):
        dog = fx.dog
```
25 Aug 2025 7:20pm GMT
Caktus Consulting Group: How to migrate from pip-tools to uv
At Caktus, many of our projects use pip-tools for dependency management. Following Tobias' post How to Migrate your Python & Django Projects to uv, we were looking to migrate other projects to uv, but the path seemed less clear with existing pip-tools setups. Our requirements are often spread across multiple files, like this:
25 Aug 2025 6:00pm GMT
Real Python: How to Write Docstrings in Python
Writing clear, consistent docstrings in Python helps others understand your code's purpose, parameters, and outputs. In this guide on how to write docstrings in Python, you'll learn about best practices, standard formats, and common pitfalls to avoid, ensuring your documentation is accessible to users and tools alike.
By the end of this tutorial, you'll understand that:
- Docstrings are strings used to document your Python code and can be accessed at runtime.
- Python comments and docstrings have important differences.
- One-line and multiline docstrings are classifications of docstrings.
- Common docstring formats include reStructuredText, Google-style, NumPy-style, and doctest-style.
- Antipatterns such as inconsistent formatting should be avoided when writing docstrings.
Explore the following sections to see concrete examples and detailed explanations for crafting effective docstrings in your Python projects.
Get Your Code: Click here to download the free sample code that shows you how to write docstrings in Python.
Take the Quiz: Test your knowledge with our interactive "How to Write Docstrings in Python" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Write Docstrings in PythonTest your knowledge of Python docstrings, including syntax, conventions, formats, and how to access and generate documentation.
Getting Started With Docstrings in Python
Python docstrings are string literals that provide information about Python functions, classes, methods, and modules, allowing them to be properly documented. They are placed immediately after the definition line in triple double quotes (""").
Their use and convention are described in PEP 257, which is a Python Enhancement Proposal (PEP) that outlines conventions for writing docstrings. Docstrings don't follow a strict formal style. Here's an example:
docstring_format.py
```python
def determine_magic_level(magic_number):
    """
    Multiply a wizard's favorite number by 3 to reveal their magic level.
    """
    return magic_number * 3
```
Docstrings are a built-in means of documentation. While this may remind you of comments in Python, docstrings serve a distinct purpose. If you're curious and would like to see a quick breakdown of the differences now, open the collapsible section below.
Python comments and docstrings seem a lot alike, but they're actually quite different in a number of ways because they serve different purposes:
| Comments | Docstrings |
|---|---|
| Begin with # | Are enclosed in triple quotes (""") |
| Consist of notes and reminders written by developers for other developers | Provide documentation for users and tools |
| Are ignored by the Python interpreter | Are stored in .__doc__ and accessible at runtime |
| Can be placed anywhere in code | Are placed at the start of modules, classes, and functions |
To summarize, comments explain parts of an implementation that may not be obvious or that record important notes for other developers. Docstrings describe modules, classes, and functions so users and tools can access that information at runtime.
So, while comments and docstrings may look similar at first glance, their purpose and behavior in Python are different. Next, you'll look at one-line and multiline docstrings.
One-Line vs Multiline Docstrings
Docstrings are generally classified as either one-line or multiline. As the names suggest, one-line docstrings take up only a single line, while multiline docstrings span more than one line. While this may appear to be a slight difference, how you use and format them in your code matters.
An important formatting rule from PEP 257 is that one-line docstrings should be concise, while multiline docstrings should have their closing quotes on a new line. You may resort to a one-line docstring for relatively straightforward programs like the one below:
one_line_docstring.py
```python
import random

def picking_hat():
    """Return a random house name."""
    houses = ["Gryffindor", "Hufflepuff", "Ravenclaw", "Slytherin"]
    return random.choice(houses)
```
In this example, you see a program that returns a random house as depicted in the classic Harry Potter stories. This is a good example for the use of one-line docstrings.
You use multiline docstrings when you have to provide a more thorough explanation of your code, which is helpful for other developers. Generally, a docstring should contain parameters, return value details, and a summary of the code.
You're free to format docstrings as you like. That being said, you'll learn later that there are common docstring formats that you may follow. Here's an example of a multiline docstring:
multiline_docstring.py
```python
def get_harry_potter_book(publication_year, title):
    """
    Retrieve a Harry Potter book by its publication year and name.

    Parameters:
        publication_year (int): The year the book was published.
        title (str): The title of the book.

    Returns:
        str: A sentence describing the book and its publication year.
    """
    return f"The book {title!r} was published in the year {publication_year}."
```
As you can see, the closing quotes for this multiline docstring appear on a separate line. Now that you understand the difference between one-line and multiline docstrings, you'll learn how to access docstrings in your code.
Ways to Access Docstrings in Python
Unlike code comments, docstrings aren't ignored by the interpreter. They become a part of the program and serve as associated documentation for anyone who wants to understand your program and what it does. That's why knowing how to access docstrings is so useful. Python provides two built-in ways to access docstrings: the .__doc__ attribute and the help() function.
The .__doc__ Attribute
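As a quick preview of this first approach, the docstring of the earlier picking_hat() example is available directly on the function object:

```python
def picking_hat():
    """Return a random house name."""

# The interpreter stores the docstring on the function object itself:
print(picking_hat.__doc__)
# Return a random house name.

# help(picking_hat) renders the same information in a pager-friendly form.
```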
Read the full article at https://realpython.com/how-to-write-docstrings-in-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
25 Aug 2025 2:00pm GMT
Hugo van Kemenade: EuroPython 2025: A roundup of writeups
Some out-of-context quotes:
- "We can just bump the version and move on." - Dr. Brett Cannon
- "You just show up. That's it." - Rodrigo Girão Serrão
- "If it kwargs like a dorg, it's a dorg." - Sebastián Ramírez
- "Our job will be to put the human in." - Paul Everitt
20 July 2025
21 July 2025
23 July 2025
24 July 2025
11 August 2025
12 August 2025
21 August 2025
And a bunch of LinkedIn posts:
- Alicja Kocieniewska
- Diego Russo
- Ece Akdeniz
- Jodie Burchell
- Kseniia Usyk
- Lara Krämer
- Libor Vaněk
- Marco Richetta
- Olena Yefymenko
- Vassiliki Dalakiari
Finally, the official photos and videos should be up soon, and here are my photos.
Header photo: Savannah Bailey's keynote (CC BY-NC-SA 2.0 Hugo van Kemenade).
25 Aug 2025 1:59pm GMT
Real Python: Quiz: How to Write Docstrings in Python
Want to get comfortable writing and using Python docstrings? This quiz helps you revisit best practices, standard conventions, and common tools.
You'll review the basics of docstring syntax, how to read them at runtime, and different formatting styles. For more details, check out the tutorial How to Write Docstrings in Python.
[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
25 Aug 2025 12:00pm GMT