18 Jul 2025

Planet Python

Matt Layman: Enhancing Chatbot State Management with LangGraph

Picture this: it's late and I'm deep in a coding session, wrestling with a chatbot that's starting to feel more like a living thing than a few lines of Python. Today's mission? Supercharge the chatbot's ability to remember and verify user details like names and birthdays using LangGraph. Let's unpack the journey, from shell commands to Git commits, and see how this bot got a memory upgrade. For clarity, this is my adventure running through the LangGraph docs.

18 Jul 2025 12:00am GMT

17 Jul 2025

Wingware: Wing Python IDE Version 11.0.2 - July 17, 2025

Wing Python IDE version 11.0.2 is now available. It improves source code analysis, avoids duplicate evaluation of values in the Watch tool, fixes using ruff as an external code checker in the Code Warnings tool, and makes a few other minor improvements.

Wing 11 Screen Shot

Downloads

Be sure to Check for Updates in Wing's Help menu after downloading, to make sure that you have the latest hot fixes.

Wing Pro 11.0.2

Wing Personal 11.0.2

Wing 101 11.0.2

Wing 10 and earlier versions are not affected by installation of Wing 11 and may be installed and used independently. However, project files for Wing 10 and earlier are converted when opened by Wing 11 and should be saved under a new name, since Wing 11 projects cannot be opened by older versions of Wing.

New in Wing 11

Improved AI Assisted Development

Wing 11 improves the user interface for AI assisted development by introducing two separate tools: AI Coder and AI Chat. AI Coder can be used to write, redesign, or extend code in the current editor. AI Chat can be used to ask about code or to iterate on a design or new code without directly modifying the code in an editor.

Wing 11's AI assisted development features now support not just OpenAI but also Claude, Grok, Gemini, Perplexity, Mistral, Deepseek, and any other OpenAI completions API compatible AI provider.

This release also improves setting up AI request context, so that both automatically and manually selected and described context items may be paired with an AI request. AI request contexts can now be stored, optionally shared by all projects, and used independently with different AI features.

AI requests can now also be stored in the current project or shared with all projects, and Wing comes preconfigured with a set of commonly used requests. In addition to changing code in the current editor, stored requests may create a new untitled file or run instead in AI Chat. Wing 11 also introduces options for changing code within an editor, including replacing code, commenting out code, or starting a diff/merge session to either accept or reject changes.

Wing 11 also supports using AI to generate commit messages based on the changes being committed to a revision control system.

You can now also configure multiple AI providers for easier access to different models.

For details see AI Assisted Development under Wing Manual in Wing 11's Help menu.

Package Management with uv

Wing Pro 11 adds support for the uv package manager in the New Project dialog and the Packages tool.

For details see Project Manager > Creating Projects > Creating Python Environments and Package Manager > Package Management with uv under Wing Manual in Wing 11's Help menu.

Improved Python Code Analysis

Wing 11 improves code analysis of literals such as dicts and sets, parametrized type aliases, typing.Self, type of variables on the def or class line that declares them, generic classes with [...], __all__ in *.pyi files, subscripts in typing.Type and similar, type aliases, and type hints in strings.

Updated Localizations

Wing 11 updates the German, French, and Russian localizations, and introduces a new experimental AI-generated Spanish localization. The Spanish localization and the new AI-generated strings in the French and Russian localizations may be accessed with the new User Interface > Include AI Translated Strings preference.

Improved diff/merge

Wing Pro 11 adds floating buttons directly between the editors to make navigating differences and merging easier, allows undoing previously merged changes, and does a better job managing scratch buffers, scroll locking, and sizing of merged ranges.

For details see Difference and Merge under Wing Manual in Wing 11's Help menu.

Other Minor Features and Improvements

Wing 11 also improves the custom key binding assignment user interface, adds a Files > Auto-Save Files When Wing Loses Focus preference, warns immediately when opening a project with an invalid Python Executable configuration, allows clearing recent menus, expands the set of available special environment variables for project configuration, and makes a number of other bug fixes and usability improvements.

Changes and Incompatibilities

Since Wing 11 replaced the AI tool with AI Coder and AI Chat, and AI configuration is completely different than in Wing 10, you will need to reconfigure your AI integration manually in Wing 11. This is done with Manage AI Providers in the AI menu. After adding the first provider configuration, Wing will set that provider as the default. You can switch between providers with Switch to Provider in the AI menu.

If you have questions, please don't hesitate to contact us at support@wingware.com.

17 Jul 2025 1:00am GMT

16 Jul 2025

Real Python: Python Scope and the LEGB Rule: Resolving Names in Your Code

The scope of a variable in Python determines where in your code that variable is visible and accessible. Python has four general scope levels: local, enclosing, global, and built-in. When searching for a name, Python goes through these scopes in order. It follows the LEGB rule, which stands for Local, Enclosing, Global, and Built-in.

Understanding how Python manages the scope of variables and names is a fundamental skill for you as a Python developer. It helps you avoid unexpected behavior and errors related to name collisions or referencing the wrong variable.

By the end of this tutorial, you'll understand that:

  • A scope in Python defines where a variable is accessible, following the local, enclosing, global, and built-in (LEGB) rule.
  • A namespace is a dictionary that maps names to objects and determines their scope.
  • The four scope levels (local, enclosing, global, and built-in) each control variable visibility in a specific context.
  • Common scope-related built-in functions include globals() and locals(), which provide access to global and local namespaces.

To get the most out of this tutorial, you should be familiar with Python concepts like variables, functions, inner functions, exception handling, comprehensions, and classes.

Get Your Code: Click here to download the free sample code that you'll use to learn about Python scope and the LEGB rule.

Understanding the Concept of Scope

In programming, the scope of a name defines the region of a program where you can unambiguously access that name, which could identify a variable, constant, function, class, or any other object. In most cases, you'll only be able to access a name within its own scope or from an inner or nested scope.

Nearly all programming languages use the concept of scope to avoid name collisions and unpredictable behavior. Most often, you'll distinguish between two main types of scope:

  1. Global scope: Names in this scope are available to all your code.
  2. Local scope: Names in this scope are only available or visible to the code within the scope.

Scope came about because early programming languages like BASIC only had global names. With this type of name, any part of the program could modify any variable at any time, making large programs difficult to maintain and debug. To work with global names, you'd need to keep all the code in mind to know what value a given name refers to at any time. This is a major side effect of not having scopes and relying solely on global names.

Modern languages, like Python, use the concept of variable scoping to avoid this kind of issue. When you use a language that implements scopes, you won't be able to access all the names in a program from all locations. Instead, your ability to access a name depends on its scope.

Note: In this tutorial, you'll be using the term name to refer to the identifiers of variables, constants, functions, classes, or any other object that can be assigned a name.

The names in your programs take on the scope of the code block in which you define them. When you can access a name from somewhere in your code, then the name is in scope. If you can't access the name, then the name is out of scope.

Names and Scopes in Python

Because Python is a dynamically-typed language, its variables come into existence when you first assign them a value. Similarly, functions and classes are available after you define them using def or class, respectively. Finally, modules exist after you import them into your current scope.

You can create names in Python using any of the following operations:

Operation            Example
-------------------  -----------------------------------------
Assignment           variable = value
Import               import module or from module import name
Function definition  def func(): pass
Function argument    func(value1, value2, ..., valueN)
Class definition     class DemoClass: pass

These are all ways to assign a value to either a variable, constant, function, class, instance, or module. In each case, you end up with a name that has a specific scope. This scope will depend on where in your code you've defined the name at hand.

Note: There's an important difference between assignment operations and reference or access operations. When you assign a name, you're either creating that name or making it reference a different object. When you reference a name, you're retrieving the value that the name points to.

Python uses the location of a name definition to associate it with a particular scope. In other words, the place in which you define a name in your code determines the scope or visibility of that name.

For example, if you define a name inside a function, then that name will have a local scope. You can only access the name locally within the function implementation. In contrast, if you define a name at the top level of a module, then that name will have a global scope. You'll be able to access it from anywhere in your code.
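A minimal sketch can make the lookup order concrete. The names here (x, outer, inner) are illustrative, not from the tutorial's downloadable code:

```python
# A minimal sketch of the LEGB lookup order.
x = "global"          # defined at the top level: global scope

def outer():
    x = "enclosing"   # enclosing scope for inner()

    def inner():
        x = "local"   # found first: local scope wins
        return x

    return inner(), x

print(outer())     # ('local', 'enclosing')
print(x)           # 'global'
print(len("abc"))  # 3: 'len' is resolved last, in the built-in scope
```

Each function resolves x in the nearest scope that defines it, which is why inner() never sees the global value.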

Scope vs Namespace in Python

The concept of scope is closely related to the concept of namespace. A scope determines the visibility and lifetime of names, while a namespace provides the place where those names are stored.

Python implements namespaces as dictionaries that map names to objects. These dictionaries are the underlying mechanism that Python uses to store names under a specific scope. You can often access them through the .__dict__ attribute of the owning object.
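A quick, hedged illustration of those dictionaries at work; app_name, greet, and message are made-up names for demonstration:

```python
# Peeking at the dictionaries behind the scopes.
import math

app_name = "demo"

def greet():
    message = "hello"
    return sorted(locals())  # the function's local namespace as a dict

print("app_name" in globals())  # True: module-level names live in globals()
print(greet())                  # ['message']
print("pi" in math.__dict__)    # True: a module stores its names in .__dict__
```

globals() and locals() return the dictionaries directly, and .__dict__ exposes the namespace of the owning object, such as a module or a class.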

Read the full article at https://realpython.com/python-scope-legb-rule/ »


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

16 Jul 2025 2:00pm GMT

Mike Driscoll: An Intro to Asciimatics – Another Python TUI Package

Text-based user interfaces (TUIs) have gained significant popularity in recent years. Even Rust has its own library called Ratatui after all. Python has several different TUI packages to choose from. One of those packages is called Asciimatics.

While Asciimatics is not as full-featured and slick as Textual is, you can do quite a bit with Asciimatics. In fact, there is a special kind of charm to the old-school flavor of the TUIs that you can create using Asciimatics.

In this tutorial, you will learn the basics of Asciimatics.

The purpose of this tutorial is not to be exhaustive, but to give you a sense of how easy it is to create a user interface with Asciimatics. Be sure to read the complete documentation and check out their examples to learn more.

For now, let's get started!

Installation

Asciimatics is a third-party Python package, which means it is not included with Python and you will need to install it. You should use a Python virtual environment when installing packages or creating new applications.

Whether you use the virtual environment or not, you can use pip to install Asciimatics:

python -m pip install asciimatics

Once Asciimatics is installed, you can proceed to creating a Hello World application.

Creating a Hello World Application

Creating a simple application is a concrete way to learn how to use an unfamiliar package. You will create a fun little application that "prints" out "Hello from Asciimatics" multiple times and in multiple colors.

Open up your favorite Python IDE or text editor and create a new file called hello_asciimatics.py and then add the following code to it:

from random import randint
from asciimatics.screen import Screen

def hello(screen: Screen):
    while True:
        screen.print_at("Hello from ASCIIMatics",
                        randint(0, screen.width), randint(0, screen.height),
                        colour=randint(0, screen.colours - 1),
                        bg=randint(0, screen.colours - 1)
                        )
        key = screen.get_key()
        if key in (ord("Q"), ord("q")):
            return
        screen.refresh()

Screen.wrapper(hello)

This code takes in an Asciimatics Screen object and draws text on it. In this case, you use the screen's print_at() method to draw the text, with Python's handy random module choosing random coordinates in your terminal as well as random foreground and background colors.

You run this inside an infinite loop, so the text gets drawn all over the screen, each iteration writing over the top of the previous ones.

If the user presses the "Q" key on their keyboard, the application breaks out of the loop and exits.

When you run this code, you should see something like this:

Hello Asciimatics

Isn't that neat? Give it a try on your machine and verify that it works.

Now you are ready to create a form!

Creating a Form

When you want to ask the user for some information, you will usually use a form. This is true in web, mobile, and desktop applications alike.

To make this work in Asciimatics, you need a way to organize your widgets. To do that, you create a Layout object. Asciimatics follows a hierarchy of Screen -> Scene -> Effects, and then layouts and widgets.

All of this is kind of abstract, though. To make it easier to understand, you will write some code. Open up your Python IDE and create another new file. Name this new file ascii_form.py and then add this code to it:

import sys

from asciimatics.exceptions import StopApplication
from asciimatics.scene import Scene
from asciimatics.screen import Screen
from asciimatics.widgets import Frame, Button, Layout, Text

class Form(Frame):
    def __init__(self, screen):
        super().__init__(screen,
                         screen.height * 2 // 3,
                         screen.width * 2 // 3,
                         hover_focus=True,
                         can_scroll=False,
                         title="Contact Details",
                         reduce_cpu=True)
        layout = Layout([100], fill_frame=True)
        self.add_layout(layout)

        layout.add_widget(Text("Name:", "name"))
        layout.add_widget(Text("Address:", "address"))
        layout.add_widget(Text("Phone number:", "phone"))
        layout.add_widget(Text("Email address:", "email"))

        button_layout = Layout([1, 1, 1, 1])
        self.add_layout(button_layout)
        button_layout.add_widget(Button("OK", self.on_ok), 0)
        button_layout.add_widget(Button("Cancel", self.on_cancel), 3)
        self.fix()

    def on_ok(self):
        print("User pressed OK")

    def on_cancel(self):
        raise StopApplication("User pressed cancel. Quitting!")


def main(screen: Screen):
    while True:
        scenes = [
            Scene([Form(screen)], -1, name="Main Form")
        ]
        screen.play(scenes, stop_on_resize=True, start_scene=scenes[0], allow_int=True)

Screen.wrapper(main, catch_interrupt=True)

The Form class is a subclass of Frame, which is an Effect in Asciimatics. In this case, you can think of the frame as a kind of window or dialog within your terminal.

The frame will contain your form. Within the frame, you create a Layout object and tell it to fill the frame. Next, you add the widgets to the layout, which stacks them vertically, from top to bottom.

Then you create a second layout to hold two buttons: "OK" and "Cancel". The second layout is defined as having four equal columns. You then add the buttons, specifying which column each button should be placed in.

To show the frame to the user, you add the frame to a Scene and then you play() it.

When you run this code, you should see something like the following:

Asciimatics form example

Pretty neat, eh?

Now this example is great for demonstrating how to create a more complex user interface, but it doesn't show how to get the data from the user, since you haven't written any code to grab the contents of the Text widgets. However, you did see that when you created the buttons, you could bind them to specific methods that get called when the user clicks those buttons.

Wrapping Up

Asciimatics makes creating simple and complex applications for your terminal easy. However, the applications have a distinctly retro look to them that is reminiscent of the 1980s or even earlier. The applications are appealing in their own way, though.

This tutorial only scratches the surface of Asciimatics. For full details, you should check out their documentation.

If you want to create a more modern-looking user interface, you might want to check out Textual instead.

Related Reading

Want to learn how to create TUIs the modern way? Check out my book: Creating TUI Applications with Textual and Python.

Creating TUI Applications with Textual and Python

Available at the following:

The post An Intro to Asciimatics - Another Python TUI Package appeared first on Mouse Vs Python.

16 Jul 2025 12:02pm GMT

Python Software Foundation: Affirm Your PSF Membership Voting Status

Every PSF voting-eligible Member (Supporting, Contributing, and Fellow) needs to affirm their membership to vote in this year's election.

If you wish to vote in this year's PSF Board election, you must affirm your intention to vote no later than Tuesday, August 26th, 2:00 pm UTC. This year's Board Election vote begins Tuesday, September 2nd, 2:00 pm UTC, and closes on Tuesday, September 16th, 2:00 pm UTC.

You should have received an email from "psf@psfmember.org <Python Software Foundation>" with the subject "[Action Required] Affirm your PSF Membership voting intention for 2025 PSF Board Election" that contains information on how to affirm your voting status. If you were expecting to receive the email but have not (make sure to check your spam!), please email psf-elections@pyfound.org, and we'll assist you. Please note: If you opted out of emails related to your membership, you did not receive this email.

Need to check your membership status?

Log on to psfmember.org and visit your PSF Member User Information page to see your membership record and status. If you are a voting-eligible member (active Supporting, Contributing, and Fellow members of the PSF) and do not already have a login, please create an account on psfmember.org and then email psf-elections@pyfound.org so we can link your membership to your account. Please ensure you have an account linked to your membership so that we can have the most up-to-date contact information for you in the future.

How to affirm your intention to vote

You can affirm your voting intention by following the steps in our video tutorial:

PSF Bylaws

Section 4.2 of the PSF Bylaws requires that "Members of any membership class with voting rights must affirm each year to the corporation in writing that such member intends to be a voting member for such year."

Our motivation is to ensure that our elections can meet quorum as required by Section 3.9 of our bylaws. As our membership has grown, we have seen that an increasing number of Contributing and Fellow members with indefinite membership do not engage with our annual election, making quorum difficult to reach.

An election that does not reach quorum is invalid. This would cause the whole voting process to be re-held, resulting in fewer voters and an undue amount of effort on the part of PSF Staff.

Recent updates to membership and voting

If you were formerly a Managing member, your membership has been updated to Contributing as of June 25th, 2025, per last year's Bylaw change that merged Managing and Contributing memberships.

Per another recent Bylaw change that allows for simplifying the voter affirmation process by treating past voting activity as intent to continue voting, if you voted last year, you will automatically be added to the 2025 voter roll. Please note: If you removed or changed your email on psfmember.org, you may not automatically be added to this year's voter roll.

What happens next?

You'll get an email from OpaVote with a ballot on or right before September 2nd, and then you can vote!

Check out our PSF Membership page to learn more. If you have questions about membership, nominations, or this year's Board election, please email psf-elections@pyfound.org or join the PSF Discord for the upcoming Board Office Hours on August 12th, 9 PM UTC. You are also welcome to join the discussion about the PSF Board election on our forum.

16 Jul 2025 8:43am GMT

15 Jul 2025

PyCoder’s Weekly: Issue #690: JIT, __init__, dis, and That's Not It (July 15, 2025)

#690 - JULY 15, 2025
View in Browser »


Reflections on 2 Years of CPython's JIT Compiler

Ken is one of the contributors to CPython's JIT compiler. This retrospective talks about what is going well and what Ken thinks could be better with the JIT.
KEN JIN

What Is Python's __init__.py For?

Learn to declare packages with Python's __init__.py, set package variables, simplify imports, and understand what happens if this module is missing.
REAL PYTHON

Quiz: What Is Python's __init__.py For?

REAL PYTHON

[Live Event] Debugging AI Applications with Sentry

Join the Sentry team for the latest Sentry Build workshop on Debugging with Sentry AI using Seer, MCP, and Agent Monitoring. In this hands-on session, you'll learn how to debug AI-integrated applications and agents with full-stack visibility. Join live on July 23rd →
SENTRY sponsor

Disassembling Python Code Using the dis Module

Look behind the scenes to see what happens when you run your Python (CPython) code by using the tools in the dis module.
THEPYTHONCODINGSTACK.COM

PyData London 2025 Videos

YOUTUBE.COM

Python 3.14.0b4 Released

PYTHON.ORG

PEP 734: Multiple Interpreters in the Stdlib (Final)

PYTHON.ORG

PEP 792: Project Status Markers in the Simple Index (Accepted)

PYTHON.ORG

Articles & Tutorials

Run Coverage on Tests

Code coverage tools tell you just what parts of your programs got executed during test runs. They're an important part of your test suite; without them, you may miss errors in the tests themselves. This post has two quick examples of just why you should use a coverage tool.
HUGO VAN KEMENADE

Python Software Foundation Bylaws Change

To comply with a variety of data privacy laws in the EU, UK, and California, the PSF is updating section 3.8 of the bylaws which formerly allowed any voting member to request a list of all members' names and email addresses.
PYTHON SOFTWARE FOUNDATION

Happy 20th Birthday Django!

July 13th was the 20th anniversary of the first public commit to the Django code repository. In celebration, Simon has reposted his talk from the 10th anniversary on the history of the project.
SIMON WILLISON

330× Faster: Four Different Ways to Speed Up Your Code

There are many approaches to speeding up Python code; applying multiple approaches can make your code even faster. This post talks about four different ways you can achieve speed-up.
ITAMAR TURNER-TRAURING

Thinking About Running for the PSF Board? Let's Talk!

It is that time of year, the PSF board elections are starting. If you're thinking about running or want to know more, consider attending the office hours session on August 12th.
PYTHON SOFTWARE FOUNDATION

How Global Variables Work in Python Bytecode

To better understand how Python handles globals, this article walks through dynamic name resolution, the global store, and how monkey patching works at the bytecode level.
FROMSCRATCHCODE.COM • Shared by Tyler Green

Building a JIT Compiler for CPython

Talk Python To Me interviews Brandt Bucher and they talk about the upcoming JIT compiler for Python and how it is different than JITs in other languages.
KENNEDY & BUCHER podcast

International Travel to DjangoCon US 2025

DjangoCon US is in Chicago on September 8-12. If you're travelling there from outside the US, this article has details that may be helpful to you.
DJANGOCON US

Using DuckDB With Pandas, Parquet, and SQL

Learn about DuckDB's in-process architecture and SQL capabilities which can enhance performance and simplify data handling.
KHUYEN TRAN • Shared by Ben Portz

Exploring Protocols in Python

Learn how Python's protocols improve your use of type hints and static type checkers in this practical video course.
REAL PYTHON course

How to Use MongoDB in Python Flask

This article explores the benefits of MongoDB and how to use it in a Flask application.
FEDERICO TROTTA • Shared by AppSignal

Open Source Security Work Isn't "Special"

Seth gave a keynote talk at the OpenSSF Community Day NA and spoke about how in many open source projects security is thought of in isolation and it can be overwhelming to maintainers. This post from Seth is a summary of the talk and proposes changes to how we approach the security problem in open source.
SETH LARSON

Projects & Code

tika-python: Binding for Apache Tika REST Services

GITHUB.COM/CHRISMATTMANN

pytest-xdist: pytest Plugin for Distributed Testing

GITHUB.COM/PYTEST-DEV

pydoll: Automate Chromium-Based Browsers

GITHUB.COM/AUTOSCRAPE-LABS

django-rq-cron: A Cron Runner Built Atop rq

GITHUB.COM/BUTTONDOWN

PCL: Combine Python and C in One File

GITHUB.COM/HEJHDISS • Shared by Muhammed Shafin P

Events

Weekly Real Python Office Hours Q&A (Virtual)

July 16, 2025
REALPYTHON.COM

PyData Bristol Meetup

July 17, 2025
MEETUP.COM

PyLadies Dublin

July 17, 2025
PYLADIES.COM

Chattanooga Python User Group

July 18 to July 19, 2025
MEETUP.COM

IndyPy X IndyAWS: Python-Powered Cloud

July 22 to July 23, 2025
MEETUP.COM

PyOhio 2025

July 26 to July 28, 2025
PYOHIO.ORG


Happy Pythoning!
This was PyCoder's Weekly Issue #690.
View in Browser »


[ Subscribe to 🐍 PyCoder's Weekly 💌 - Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

15 Jul 2025 7:30pm GMT

Mike Driscoll: Creating TUI Applications with Textual and Python is Released

Learn how to create text-based user interfaces (TUIs) using Python and the amazing Textual package.

Creating TUI Applications with Textual and Python (paperback)

Textual is a rapid application development framework for your terminal or web browser. You can build complex, sophisticated applications in your terminal. While terminal applications are text-based rather than pixel-based, they still provide fantastic user interfaces.

The Textual package allows you to create widgets in your terminal that mimic those used in a web or GUI application.

Creating TUI Applications with Textual and Python teaches you how to use Textual to make striking applications of your own. The book's first half will teach you everything you need to know to develop a terminal application.

The book's second half has many small applications you will learn how to create. Each chapter also includes challenges to complete to help cement what you learn or give you ideas for continued learning.

Here are some of the applications you will create:

Where to Buy

Creating TUI Applications with Textual and Python

You can purchase Creating TUI Applications with Textual and Python on the following websites:

Calculator

CSV Viewer

CSV Viewer TUI

MP3 Player

MP3 Player TUI

Weather App

Weather TUI

The post Creating TUI Applications with Textual and Python is Released appeared first on Mouse Vs Python.

15 Jul 2025 3:41pm GMT

Ruslan Spivak: Book Notes: The Dark Art of Linear Algebra by Seth Braver — Chapter 1 Review

"Mathematics is the art of reducing any problem to linear algebra." - William Stein

If you've ever looked at a vector and thought, "Just a column of numbers, right?", this chapter will change that. The Dark Art of Linear Algebra (aka DALA) by Seth Braver opens with one of the clearest intros I've read. Not every part clicks on the first pass, but the effort pays off. Paired with the author's videos, this is a strong starting point whether you're learning math for the first time or coming back to it with purpose.



As I wrote in Unlocking AI with Math and [Book Notes] Infinitesimals, Derivatives, and Beer - Full Frontal Calculus (Ch. 1), I'm not learning math to pass a test. I'm learning it to understand the machinery behind AI and robotics, and eventually build machines of my own. (That would be fun, right?)


That goal needs a solid grasp of linear algebra. And it starts with understanding what a vector really is. Not just how to work with vectors algebraically, but how they behave in space and fit into a larger structure.

This chapter helped me sharpen that understanding.


Chapter Notes

What's a Vector?

The book makes it clear that the answer to this question will evolve as you go deeper into linear algebra. But Chapter 1 starts simple: a vector is an arrow. A geometric object. A displacement.

In the video that comes with the chapter, the author even says to forget everything you think you know about vectors. He introduces them geometrically, which makes them feel tangible and helps you see familiar algebraic ideas in a visual, spatial way.

Vector Addition

The book introduces vector addition visually. Once you see vectors as displacements or moves through space, the addition feels natural. Almost obvious.

Image source: DALA Ch1

The text doesn't focus on vector subtraction, but there's an exercise on it. The companion video shows two methods. One of them is subtraction by addition: flip the direction of the vector you want to subtract, then add. It reminded me of that Office scene where Andy says "addition by subtraction," and Michael asks, "What does that even mean?" In that context, it's just a throwaway phrase. But in vector math, subtraction by addition is a real method. Flip the vector, then add. If you've done engineering, you've likely seen this before.

Vector addition also follows familiar rules like commutativity and associativity. If those sound fuzzy, the book and video prove them using triangles and parallelograms. No heavy algebra, just geometry.

One nice bonus is that the commutative proof gives you another way to add vectors. Place both tails at the same point, draw a parallelogram, and the diagonal gives the sum. It's clean and easy to visualize:


Stretching Vectors

Scalar multiplication is introduced as a way to stretch, shrink, or flip a vector, not just multiply its components.

The author even explains where the word scalar comes from. Numbers are called scalars because they scale vectors. I liked that he doesn't assume you already know this.

To stretch a vector, multiply by 3.

To flip it, multiply by -1.

To collapse it, multiply by 0.

It's easier to remember when you learn it by drawing instead of just computing.
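Drawing first, computing second: once the picture is clear, the componentwise version is a one-liner. A small sketch (again mine, not the book's):

```python
# Scalar multiplication scales every component by the same number.
def scale(c, v):
    return tuple(c * a for a in v)

v = (2, 1)
assert scale(3, v) == (6, 3)      # stretched by 3
assert scale(-1, v) == (-2, -1)   # flipped
assert scale(0, v) == (0, 0)      # collapsed to the zero vector
```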


Standard Basis Vectors

Only after you've built a solid geometric understanding does the author introduce the standard basis vectors: i, j, and k. By then, it's clear that 2i + 3j + 5k is just a weighted sum of familiar directions.

The chapter shows how to express vectors in ℝ² and ℝ³ using these basis vectors, and how to rewrite them in column form.


Length of Vectors

Be sure to watch the videos that go with this chapter. They walk you through finding the length of a vector visually.

You'll start with the Pythagorean theorem to calculate the length of a vector in ℝ³, then extend the idea to ℝⁿ. The chapter also proves the general length formula when a vector is written in Cartesian coordinates. Neat.
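The length formula is easy to check in Python (my snippet, not the book's): `math.hypot` implements exactly this square-root-of-sum-of-squares recipe and, since Python 3.8, accepts any number of coordinates:

```python
import math

# Length in R^2: the classic 3-4-5 right triangle.
v = (3, 4)
assert math.isclose(math.hypot(*v), 5.0)

# Length in R^3: same idea, one more squared term.
w = (1, 2, 2)
assert math.isclose(math.hypot(*w), 3.0)

# The general R^n formula, written out by hand:
def length(vec):
    return math.sqrt(sum(c * c for c in vec))

assert math.isclose(length(w), math.hypot(*w))
```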


The Dot Product

The chapter defines the dot product using the same geometric approach as earlier sections, and it makes sense. But for me, it really clicked in the physics example where work is defined using the dot product. The author's video made it even clearer.


In the screenshot above, I underlined "Thus we see that work, viewed in a more general setting, is simply a dot product" and scribbled "watch the video" in the margin. Just a reminder that the video is a great companion to the chapter.

The text then walks through key properties: commutativity, dotting a vector with itself, the distributive property, a test for perpendicularity, and how to compute the dot product in ℝ².

You could memorize the formula. But it's much more satisfying to understand the parts and derive it from scratch. Like Einstein said, "Any fool can know. The point is to understand."

Here's a step-by-step derivation, written out in my notes:


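To convince yourself numerically (a snippet of mine, not part of the book's derivation), you can check in ℝ² that the component formula agrees with |a||b|·cos θ:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.hypot(*v)

a, b = (3, 4), (5, 0)

# Angle between a and b, from the angle each makes with the x-axis.
theta = math.atan2(a[1], a[0]) - math.atan2(b[1], b[0])

assert dot(a, b) == 15
# Component formula agrees with the geometric definition |a||b|cos(theta):
assert math.isclose(dot(a, b), norm(a) * norm(b) * math.cos(theta))
```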
Thoughts and Tips

Like Full Frontal Calculus did for derivatives, this chapter tears vectors down to the basics and builds them back up. It does that visually, intuitively, and from first principles. It starts with geometry, not formulas. By the end, it's clear that coordinates are just a way to describe vectors. They are not the vectors themselves.

Verdict: Highly recommend if you want a clear, visual grasp of what vectors really are. Especially if linear algebra has ever felt abstract, dry, or overly symbolic.

If you plan to read the chapter, these tips helped me get the most out of it:

  1. Read slowly. Then read slowly again. The material is clear, but it rewards focused attention. Grab a paperback if you can. Write in the margins. Make the book your own.

  2. Watch the author's YouTube videos. The book explains the idea. The video often makes it stick. If you're reading any of Braver's math books, don't skip the videos. They're short, clear, and worth it.

    1. Vectors (from a geometric perspective)
    2. The Dot Product (from a geometric perspective)
  3. Don't worry about the proofs. They're explained in plain language, supported by visuals, and still rigorous. You don't need a separate book on how to follow them. They just make sense.

  4. Brush up on your trig. Knowing how cosine works pays off when finding angles between vectors. It's a small part of the chapter, but if you're rusty, check out the trig section in Precalculus Made Difficult by the same author.

  5. Do the exercises. The book includes answers, which makes it great for self-study. But like in Full Frontal Calculus, the solutions are compact. Use ChatGPT or Grok (xAI) to expand on them when needed.

  6. Use spaced repetition. For ideas that are hard to keep in memory, try active recall. I use Anki, but any similar tool should work.

  7. Check out the book sample. The author offers a sample on his site. If you're on the fence, it gives you a solid feel for the writing and style.


These pages and videos are exactly what I wish I had the first time I saw vectors. They make the concept click and give you a foundation you can build on, whether you're starting fresh or coming back to review.

More to come. Stay tuned.

Originally published in my newsletter Beyond Basics. If you'd like to get future posts like this by email, you can subscribe here.

P.S. I'm not affiliated with the author. I just really enjoy his books and wanted to share that.

15 Jul 2025 2:38pm GMT

Real Python: Getting Started With marimo Notebooks

marimo notebooks redefine the notebook experience by offering a reactive environment that addresses the limitations of traditional linear notebooks. With marimo, you can seamlessly reproduce and share content while benefiting from automatic cell updates and a correct execution order. Discover how marimo's features make it an ideal tool for documenting research and learning activities.

By the end of this video course, you'll understand that:



15 Jul 2025 2:00pm GMT

Ned Batchelder: 2048: iterators and iterables

I wrote a low-tech terminal-based version of the classic 2048 game and had some interesting difficulties with iterators along the way.

2048 has a 4×4 grid with sliding tiles. Because the tiles can slide left or right and up or down, sometimes we want to loop over the rows and columns from 0 to 3, and sometimes from 3 to 0. My first attempt looked like this:


N = 4
if sliding_right:
    cols = range(N-1, -1, -1)   # 3 2 1 0
else:
    cols = range(N)             # 0 1 2 3

if sliding_down:
    rows = range(N-1, -1, -1)   # 3 2 1 0
else:
    rows = range(N)             # 0 1 2 3

for row in rows:
    for col in cols:
        ...

This worked, but those counting-down ranges are ugly. Let's make it nicer:


cols = range(N)             # 0 1 2 3
if sliding_right:
    cols = reversed(cols)   # 3 2 1 0

rows = range(N)             # 0 1 2 3
if sliding_down:
    rows = reversed(rows)   # 3 2 1 0

for row in rows:
    for col in cols:
        ...

Looks cleaner, but it doesn't work! Can you see why? It took me a bit of debugging to see the light.

range() produces an iterable: something that can be iterated over. Similar but different is that reversed() produces an iterator: something that is already iterating. Some iterables (like ranges) can be used more than once, creating a new iterator each time. But once an iterator like reversed() has been consumed, it is done. Iterating it again will produce no values.

If "iterable" vs "iterator" is already confusing here's a quick definition: an iterable is something that can be iterated, that can produce values in a particular order. An iterator tracks the state of an iteration in progress. An analogy: the pages of a book are iterable; a bookmark is an iterator. The English hints at it: an iter-able is able to be iterated at some point, an iterator is actively iterating.
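The difference is easy to see in a few lines:

```python
r = range(4)
assert list(r) == [0, 1, 2, 3]
assert list(r) == [0, 1, 2, 3]   # a range is a reusable iterable

it = reversed(r)
assert list(it) == [3, 2, 1, 0]
assert list(it) == []            # the iterator is now exhausted
```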

The outer loop of my double loop was iterating only once over the rows, so the row iteration was fine whether it was going forward or backward. But the columns were being iterated again for each row. If the columns were going forward, they were a range, a reusable iterable, and everything worked fine.

But if the columns were meant to go backward, they were a one-use-only iterator made by reversed(). The first row would get all the columns, but the other rows would try to iterate using a fully consumed iterator and get nothing.

The simple fix was to use list() to turn my iterator into a reusable iterable:


cols = list(reversed(cols))

The code was slightly less nice, but it worked. An even better fix was to change my doubly nested loop into a single loop:


for row, col in itertools.product(rows, cols):

That also takes care of the original iterator/iterable problem, so I can get rid of that first fix:


cols = range(N)
if sliding_right:
    cols = reversed(cols)

rows = range(N)
if sliding_down:
    rows = reversed(rows)

for row, col in itertools.product(rows, cols):
    ...

Once I had this working, I wondered why product() solved the iterator/iterable problem. The docs have a sample Python implementation that shows why: internally, product() is doing just what my list() call did: it makes an explicit iterable from each of the iterables it was passed, then picks values from them to make the pairs. This lets product() accept iterators (like my reversed range) rather than forcing the caller to always pass iterables.

If your head is spinning from all this iterable / iterator / iteration talk, I don't blame you. Just now I said, "it makes an explicit iterable from each of the iterables it was passed." How does that make sense? Well, an iterator is an iterable. So product() can take either a reusable iterable (like a range or a list) or it can take a use-once iterator (like a reversed range). Either way, it populates its own reusable iterables internally.
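Here's a simplified generator in the spirit of the docs' sample implementation (a sketch, not the actual C implementation):

```python
def product(*iterables):
    # Like the docs sample: materialize each argument up front, so
    # one-shot iterators (e.g. reversed ranges) are safe to reuse.
    pools = [tuple(pool) for pool in iterables]
    result = [[]]
    for pool in pools:
        result = [combo + [item] for combo in result for item in pool]
    for combo in result:
        yield tuple(combo)

# Both arguments are use-once iterators, yet every pair comes out:
pairs = list(product(reversed(range(2)), reversed(range(2))))
assert pairs == [(1, 1), (1, 0), (0, 1), (0, 0)]
```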

Python's iteration features are powerful but sometimes require careful thinking to get right. Don't overlook the tools in itertools, and mind your iterators and iterables!

• • •

Some more notes:

1: Another way to reverse a range: you can slice them!


>>> range(4)
range(0, 4)
>>> range(4)[::-1]
range(3, -1, -1)
>>> reversed(range(4))
<range_iterator object at 0x10307cba0>

It didn't occur to me to reverse-slice the range, since reversed is right there, but the slice gives you a new reusable range object while reversing the range gives you a use-once iterator.

2: Why did product() explicitly store the values it would need but reversed did not? Two reasons: first, reversed() depends on the __reversed__ dunder method, so it's up to the original object to decide how to implement it. Ranges know how to produce their values in backward order, so they don't need to store them all. Second, product() is going to need to use the values from each iterable many times and can't depend on the iterables being reusable.
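The first point is easy to illustrate with a toy class (hypothetical, not from the post): reversed() just asks the object for its own backward iterator via __reversed__, so a range-like object never needs to store its values:

```python
class Countdown:
    """A hypothetical range-like class that knows its own reversal."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return iter(range(self.n))
    def __reversed__(self):
        # Produce values backward without materializing them all.
        return iter(range(self.n - 1, -1, -1))

c = Countdown(3)
assert list(c) == [0, 1, 2]
assert list(reversed(c)) == [2, 1, 0]
```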

15 Jul 2025 10:52am GMT

death and gravity: Inheritance over composition, sometimes

In Process​Thread​Pool​Executor: when I‍/‍O becomes CPU-bound, we built a hybrid concurrent.​futures executor that runs tasks in multiple threads on all available CPUs, bypassing Python's global interpreter lock.

Here's some interesting reader feedback:

Currently, the code is complex due to subclassing and many layers of delegation. Could this solution be implemented using only functions, no classes? Intuitively I feel classes would be hell to debug.

Since a lot of advanced beginners struggle with structuring code, we'll implement the same executor using inheritance, composition, and functions only, compare the solutions, and reach some interesting conclusions. Consider this a worked example.

Note

Today we're focusing on code structure. While not required, reading the original article will give you a better idea of why the code does what it does.

Requirements #

Before we delve into the code, we should have some understanding of what we're building. The original article sets out the following functional requirements:

  1. Implement the Executor interface; we want a drop-in replacement for existing concurrent.​futures executors, so that user code doesn't have to change.
  2. Spread the work to one worker process per CPU, and then further to multiple threads inside each worker, to work around CPU becoming a bottleneck for I‍/‍O.

Additionally, we have two implicit non-functional requirements:

  1. Use the existing executors where possible (less code means fewer bugs).
  2. Only depend on stable, documented features; we don't want our code to break when concurrent.​futures internals change.

concurrent.futures #

Since we're building on top of concurrent.​futures, we should also get familiar with it; the docs already provide a great introduction:

The concurrent.​futures module provides a high-level interface for asynchronously executing callables. [...this] can be performed with threads, using Thread​Pool​Executor, or separate processes, using Process​Pool​Executor. Both implement the same interface, which is defined by the abstract Executor class.

Let's look at the classes in more detail.

Executor is an abstract base class1 defined in concurrent.​futures.​_base. It provides dummy submit() and shutdown() methods, a concrete map() method implemented in terms of submit(), and context manager methods that shutdown() the executor on exit. Notably, the documentation does not mention the concrete methods, instead saying that the class "should not be used directly, but through its concrete subclasses".

The first subclass, Thread​Pool​Executor, is defined in concurrent.​futures.​thread; it implements submit() and shutdown(), inheriting map() unchanged.

The second one, Process​Pool​Executor, is defined in concurrent.​futures.​process; as an optimization, it overrides map() to chop the input iterables and pass the chunks to the superclass method with super().

Three solutions #

Now we're ready for code.

Inheritance #

First, the original implementation,2 arguably a textbook example of inheritance.

We override __init__, submit(), and shutdown(), and do some extra stuff on top of the inherited behavior, which we access through super(). We inherit the context manager methods, map(), and any public methods Process​Pool​Executor may get in the future, assuming they use only other public methods (more on this below).

class ProcessThreadPoolExecutor(concurrent.futures.ProcessPoolExecutor):

    def __init__(self, max_threads=None, initializer=None, initargs=()):
        self.__result_queue = multiprocessing.Queue()
        super().__init__(
            initializer=_init_process,
            initargs=(self.__result_queue, max_threads, initializer, initargs)
        )
        self.__tasks = {}
        self.__result_handler = threading.Thread(target=self.__handle_results)
        self.__result_handler.start()

    def submit(self, fn, *args, **kwargs):
        outer = concurrent.futures.Future()
        task_id = id(outer)
        self.__tasks[task_id] = outer

        outer.set_running_or_notify_cancel()
        inner = super().submit(_submit, task_id, fn, *args, **kwargs)

        return outer

    def __handle_results(self):
        for task_id, ok, result in iter(self.__result_queue.get, None):
            outer = self.__tasks.pop(task_id)
            if ok:
                outer.set_result(result)
            else:
                outer.set_exception(result)

    def shutdown(self, wait=True):
        super().shutdown(wait=wait)
        if self.__result_queue:
            self.__result_queue.put(None)
            if wait:
                self.__result_handler.join()
            self.__result_queue.close()
            self.__result_queue = None

Because we're subclassing a class with private, undocumented attributes, our private attributes have to start with double underscores to avoid clashes with superclass ones (such as _result_queue).
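The clash the double underscores avoid is Python's name mangling at work; a minimal illustration (not from the article):

```python
class Base:
    def __init__(self):
        self.__x = 1            # stored as _Base__x

class Child(Base):
    def __init__(self):
        super().__init__()
        self.__x = 2            # stored as _Child__x -- no clash

c = Child()
# Each class's private attribute survives independently:
assert c._Base__x == 1
assert c._Child__x == 2
```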

In addition to the main class, there are some global functions used in the worker processes which remain unchanged regardless of the solution:

# this code runs in each worker process

_executor = None
_result_queue = None

def _init_process(queue, max_threads, initializer, initargs):
    global _executor, _result_queue

    _executor = concurrent.futures.ThreadPoolExecutor(max_threads)
    _result_queue = queue

    if initializer:
        initializer(*initargs)

def _submit(task_id, fn, *args, **kwargs):
    task = _executor.submit(fn, *args, **kwargs)
    task.task_id = task_id
    task.add_done_callback(_put_result)

def _put_result(task):
    if exception := task.exception():
        _result_queue.put((task.task_id, False, exception))
    else:
        _result_queue.put((task.task_id, True, task.result()))

Download the entire file.

Composition #

OK, now let's use composition - instead of being a Process​Pool​Executor, our Process​Thread​Pool​Executor has one. At a first glance, the result is the same as before, with super() changed to self._inner:

class ProcessThreadPoolExecutor:

    def __init__(self, max_threads=None, initializer=None, initargs=()):
        self._result_queue = multiprocessing.Queue()
        self._inner = concurrent.futures.ProcessPoolExecutor(
            initializer=_init_process,
            initargs=(self._result_queue, max_threads, initializer, initargs)
        )
        self._tasks = {}
        self._result_handler = threading.Thread(target=self._handle_results)
        self._result_handler.start()

    def submit(self, fn, *args, **kwargs):
        outer = concurrent.futures.Future()
        task_id = id(outer)
        self._tasks[task_id] = outer

        outer.set_running_or_notify_cancel()
        inner = self._inner.submit(_submit, task_id, fn, *args, **kwargs)

        return outer

    def _handle_results(self):
        for task_id, ok, result in iter(self._result_queue.get, None):
            outer = self._tasks.pop(task_id)
            if ok:
                outer.set_result(result)
            else:
                outer.set_exception(result)

    def shutdown(self, wait=True):
        self._inner.shutdown(wait=wait)
        if self._result_queue:
            self._result_queue.put(None)
            if wait:
                self._result_handler.join()
            self._result_queue.close()
            self._result_queue = None

Except, we need to implement the context manager protocol ourselves:

    def __enter__(self):
        # concurrent.futures._base.Executor.__enter__
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # concurrent.futures._base.Executor.__exit__
        self.shutdown(wait=True)
        return False

...and we need to copy map() from Executor, since it should use our submit():

    def _map(self, fn, *iterables, timeout=None, chunksize=1):
        # concurrent.futures._base.Executor.map

        if timeout is not None:
            end_time = timeout + time.monotonic()

        fs = [self.submit(fn, *args) for args in zip(*iterables)]

        def result_iterator():
            try:
                fs.reverse()
                while fs:
                    if timeout is None:
                        yield _result_or_cancel(fs.pop())
                    else:
                        yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
            finally:
                for future in fs:
                    future.cancel()
        return result_iterator()

...and the chunksize optimization from its Process​Pool​Executor version:

    def map(self, fn, *iterables, timeout=None, chunksize=1):
        # concurrent.futures.process.ProcessPoolExecutor.map

        if chunksize < 1:
            raise ValueError("chunksize must be >= 1.")

        results = self._map(partial(_process_chunk, fn),
                            itertools.batched(zip(*iterables), chunksize),
                            timeout=timeout)
        return _chain_from_iterable_of_lists(results)

...and a bunch of private functions they use.

def _result_or_cancel(fut, timeout=None):
    # concurrent.futures._base._result_or_cancel
    try:
        try:
            return fut.result(timeout)
        finally:
            fut.cancel()
    finally:
        del fut

def _process_chunk(fn, chunk):
    # concurrent.futures.process._process_chunk
    return [fn(*args) for args in chunk]

def _chain_from_iterable_of_lists(iterable):
    # concurrent.futures.process._chain_from_iterable_of_lists
    for element in iterable:
        element.reverse()
        while element:
            yield element.pop()

And, when the Executor interface gets new methods, we'll need to at least forward them to the inner executor, although we may have to copy those too.

On the upside, no base class means we can name attributes however we want.

Download the entire file.


But this is Python, why do we need to copy stuff? In Python, methods are just functions, so we could almost get away with this:

class ProcessThreadPoolExecutor:
    ... # __init__, submit(), and shutdown() just as before
    __enter__ = ProcessPoolExecutor.__enter__
    __exit__ = ProcessPoolExecutor.__exit__
    map = ProcessPoolExecutor.map

Alas, it won't work - Process​Pool​Executor map() calls super().​map(), and object, the superclass of our executor, has no such method, which is why we had to change it to self.​_map() in our copy in the first place.

Functions #

Can this be done using only functions, though?

Theoretically no, since we need to implement the executor interface. Practically yes, since this is Python, where an "interface" just means having specific attributes, usually functions with specific signatures. For example, a module like this:

def init(max_threads=None, initializer=None, initargs=()):
    global _result_queue, _inner, _tasks, _result_handler
    _result_queue = multiprocessing.Queue()
    _inner = concurrent.futures.ProcessPoolExecutor(
        initializer=_init_process,
        initargs=(_result_queue, max_threads, initializer, initargs)
    )
    _tasks = {}
    _result_handler = threading.Thread(target=_handle_results)
    _result_handler.start()

def submit(fn, *args, **kwargs):
    outer = concurrent.futures.Future()
    task_id = id(outer)
    _tasks[task_id] = outer

    outer.set_running_or_notify_cancel()
    inner = _inner.submit(_submit, task_id, fn, *args, **kwargs)

    return outer

def _handle_results():
    for task_id, ok, result in iter(_result_queue.get, None):
        outer = _tasks.pop(task_id)
        if ok:
            outer.set_result(result)
        else:
            outer.set_exception(result)

def shutdown(wait=True):
    global _result_queue
    _inner.shutdown(wait=wait)
    if _result_queue:
        _result_queue.put(None)
        if wait:
            _result_handler.join()
        _result_queue.close()
        _result_queue = None

Like before, we need to copy map() with minor tweaks.

def _map(fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures._base.Executor.map

    if timeout is not None:
        end_time = timeout + time.monotonic()

    fs = [submit(fn, *args) for args in zip(*iterables)]

    def result_iterator():
        try:
            fs.reverse()
            while fs:
                if timeout is None:
                    yield _result_or_cancel(fs.pop())
                else:
                    yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
        finally:
            for future in fs:
                future.cancel()
    return result_iterator()

def map(fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures.process.ProcessPoolExecutor.map

    if chunksize < 1:
        raise ValueError("chunksize must be >= 1.")

    results = _map(partial(_process_chunk, fn),
                   itertools.batched(zip(*iterables), chunksize),
                   timeout=timeout)
    return _chain_from_iterable_of_lists(results)

Behold, we can use the module itself as an executor:

>>> ptpe.init()
>>> ptpe.submit(int, '1').result()
1

Of note, everything that was an instance variable before is now a global variable; as a consequence, only one executor can exist at any given time, since there's only the one module.3 But it gets worse - calling init() a second time will clobber the state of the first executor, leading to all sorts of bugs; if we were serious, we'd prevent it somehow.

Also, some interfaces are more complicated than having the right functions; defining __enter__ and __exit__ is not enough to use a module in a with statement, since the interpreter looks them up on the class of the object, not on the object itself. We can work around this with an alternate "constructor" that returns a context manager:

@contextmanager
def init_cm(*args, **kwargs):
    init(*args, **kwargs)
    try:
        yield sys.modules[__name__]
    finally:
        shutdown()
>>> with ptpe.init_cm() as executor:
...     assert executor is ptpe
...     ptpe.submit(int, '2').result()
...
2

Download the entire file.

Comparison #

So, how do the solutions stack up? Here's a summary:

inheritance
  pros:
    • least amount of code
    • inherits new high-level methods for free
  cons:
    • assumes inherited high-level methods use only the public API
    • attribute names have to start with double underscores (minor)

composition
  pros:
    • attributes can have any name (minor)
  cons:
    • copies lots of code
    • must be kept in sync with the interface

functions
  pros: ?
  cons:
    • copies lots of code
    • must be kept in sync with the interface
    • only one global executor at a time
    • state is harder to discover
    • alternate "constructor" to use as context manager (minor)

I may be a bit biased, but inheritance looks like a clear winner.

Composition over inheritance #

Given that favoring composition over inheritance is usually a good practice, it's worth discussing why inheritance won this time. I see three reasons:

  1. Composition helps most when you have unrelated components that need to be flexible in response to an evolving business domain; that's not the case here, so we get all the drawbacks with none of the benefits.
  2. The existing code is designed for inheritance.
  3. We have a true is-a relationship - Process​Thread​Pool​Executor really is a Process​Pool​Executor with extra behavior, and not just part of an arbitrary hierarchy.

For a different line of reasoning involving subtyping, check out Hillel Wayne's When to prefer inheritance to composition; he offers this rule of thumb:

So, here's when you want to use inheritance: when you need to instantiate both the parent and child classes and pass them to the same functions.

Forward compatibility #

The inheritance solution assumes map() and any future public Process​Pool​Executor methods are implemented only in terms of other public methods. This assumption introduces a risk that updates may break our executor; this is lowered by two things:

  1. concurrent.​futures is in the standard library, which rarely does major rewrites of existing code, and never within a minor (X.Y) version; concurrent.​futures exists in its current form since Python 3.2, released in 2011.
  2. concurrent.​futures is clearly designed for inheritance, even if mainly to enable internal reuse, and not explicitly documented.

As active mitigations, we can add a basic test suite (which we should do anyway), and document the supported Python versions explicitly (which we should do anyway if we were to release this on PyPI).

If concurrent.​futures were not in the standard library, I'd probably go with the composition version instead, although as already mentioned, this wouldn't be free from upkeep either. Another option would be to upstream Process​Thread​Pool​Executor, so that it is maintained together with the code it depends on.

Global state #

The functions-only solution is probably the worst of the three, since it has all the downsides of composition, and significant limitations due to its use of global state.

We could avoid using globals by passing the state (process pool executor instance, result queue, etc.) as function arguments, but this breaks the executor interface, and makes for an awful user experience. We could group common arguments into a single object so there's only one argument to pass around; if you call that argument self, it becomes obvious that's just a class instance with extra steps.

Having to keep track of a bunch of related globals has enough downsides that even if you do want a module-level API, it's still worth using a class to group them, and exposing the methods of a global instance at module-level (like so); Brandon Rhodes discusses this at length in The Prebound Method Pattern.
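The pattern can be sketched in a few lines (a toy counter standing in for the executor, my example rather than Brandon's):

```python
class _Counter:
    """State grouped on an instance instead of scattered module globals."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# One default instance, its bound methods exposed at module level --
# callers just do `import counter; counter.increment()`.
_instance = _Counter()
increment = _instance.increment

assert increment() == 1
assert increment() == 2
```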

Complexity #

While the code is somewhat complex, that's mostly intrinsic to the problem itself (what runs in the main vs. worker processes, passing results around, error handling, and so on), rather than due to our use of classes, which only affects how we refer to ProcessPoolExecutor methods and how we store state.

One could argue that copying a bunch of code doesn't increase complexity, but if you factor in keeping it up to date and tested, it's not exactly free either.

One could also argue that building our executor on top of ProcessPoolExecutor is increasing complexity, and in a way that's true - for example, we have two result queues and had to deal with dead workers too, which wouldn't be the case if we wrote it from scratch; but in turn, that would come with having to understand, maintain, and test 800+ lines of low-level process management code. Sometimes, the complexity I have to care about is more important than total complexity.

Debugging #

I have to come clean at this point - I use print debugging a lot 🙀 (especially if there are no tests yet, and sometimes from tests too); when that doesn't cut it, IPython's embed() usually provides enough interactivity to figure out what's going on.4

With the minimal test at the end of the file driving the executor, I used temporary print() calls in _submit(), _put_result(), and __handle_results() to check data is making its way through properly; if I expected the code to change more often, I'd replace them with permanent logging calls.

In addition, there were two debugging scripts in the benchmark file that I didn't show: one to automate killing workers at the right time, and one to make sure shutdown() waits for any pending tasks.

So, does how we wrote the code change any of this? Not really, no; all the techniques above (and using a debugger too) apply equally well. If anything, using classes makes interactive debugging easier, since it's easier to discover state via autocomplete (with functions only, you have to know to look it up on the module).

Try it out #

As I've said before, try it out - it only took ~10 minutes to convert the initial solution to the other two. In part, the right code structure is a matter of feeling and taste, and both are educated by reading and writing lots of code. If you think there's a better way to do something, do it and see how it looks; it's a sort of deliberate practice.

Learned something new today? Share this with others, it really helps!

Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!

  1. Executor is an abstract base class only by convention: it is a base class (other classes are supposed to subclass it), and it is abstract (other classes are supposed to provide concrete implementations for some methods).

    Python also allows formalizing abstract base classes using the abc module; see When to use classes in Python? When you repeat similar sets of functions for an example of this and other ways of achieving the same goal. [return]

  2. For brevity, I'm using the version before dealing with dead workers; the final code is similar, but with a more involved __handle_results. [return]

  3. This is almost true - we could "this is Python" our way deeper and reload the module while still keeping a reference to the old one, but that's just a round-about, unholy way of emulating class instances. [return]

  4. Pro tip: you can use embed() as a breakpoint() hook: PYTHONBREAKPOINT=IPython.embed python myscript.py. [return]

15 Jul 2025 10:43am GMT

Python Bytes: #440 Can't Register for VibeCon

Topics covered in this episode:

  • Switching to direnv, Starship, and uv
  • rqlite - Distributed SQLite DB
  • A Python dict that can report which keys you did not use
  • Some Markdown Stuff
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=AXcQsRZRd8k

About the show

Sponsored by PropelAuth: pythonbytes.fm/propelauth77

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list at pythonbytes.fm/friends-of-the-show. We'll never share it.

Brian #1: Switching to direnv, Starship, and uv
(https://treyhunner.com/2024/10/switching-from-virtualenvwrapper-to-direnv-starship-and-uv/)

  • Last week I mentioned that I'm ready to try direnv again, but secretly I still had some worries about the process. Thankfully, Trey has a tutorial to walk me past the troublesome parts.
  • direnv (https://direnv.net) is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory.
  • Trey has solved a bunch of the problems I had when I tried direnv before:
    • Show the virtual environment name in the prompt.
    • Place new virtual environments in a local .venv instead of in .direnv/python3.12.
    • Silence all of the "loading" and "unloading" statements every time you enter a directory.
    • A script called venv to create an environment, activate it, and create a .envrc file. I'm more used to a create script, so I'll stick with that name and Trey's contents.
    • A workon script to switch around to different projects. This is a carryover from virtualenvwrapper, but seems cool. I'll take it.
    • Adding uv to the mix for creating virtual environments. Interestingly, including --seed, which, for one, installs pip in the new environment. (Some tools need it, even if you don't.)
  • Starship
    • Trey also has some setup for Starship, but I'll get through the above first, then maybe try Starship again.
    • Some motivation:
      • Trey's setup is pretty simple. Maybe I was trying to get too fancy before.
      • Starship config lives in TOML files that can be loaded with direnv and be different for different projects. Neato.
      • Also, Trey mentions his dotfiles repo. This is a cool idea that I've been meaning to do for a long time.
  • See also: "It's Terminal - Bootstrapping With Starship, Just, Direnv, and UV" by Mario Munoz (https://www.pythonbynight.com/blog/terminal)

Michael #2: rqlite - Distributed SQLite DB (https://rqlite.io)

  • Via themlu, thanks!
  • rqlite is a lightweight, user-friendly, distributed relational database built on SQLite.
  • Built on SQLite, the world's most popular database.
  • Supports full-text search, vector search, and JSON documents.
  • Access controls and encryption for secure deployments.

Michael #3: A Python dict that can report which keys you did not use
(https://www.peterbe.com/plog/a-python-dict-that-can-report-which-keys-you-did-not-use)

  • By Peter Bengtsson.
  • Very cool for testing that a dictionary has been used as expected (e.g. all data has been sent out via an API or report).
  • Note: it does NOT track d.get(), but it's easy to add that to the class in the post.
  • Maybe someone should polish it up and put it on PyPI (that person is not me :) ).

Brian #4: Some Markdown Stuff

  • Textual 4.0.0 adds Markdown.append, which can be used to efficiently stream markdown content.
    • The reason for the major bump is an interface change to Widget.anchor.
    • Refreshing to see a semantic change cause a major version bump.
  • html-to-markdown
    • Converts HTML to markdown.
    • A complete rewrite fork of markdownify.
    • Lots of fun features, like streaming support. Curious if it can stream to Textual's Markdown.append method. Hmmm.

Joke: Vibecon is hard to attend
(https://www.reddit.com/r/programminghumor/comments/1ko7ube/vibecon/)
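The unused-keys dict from Michael's third topic is easy to prototype. This TrackingDict is a hypothetical sketch of the idea, not Peter Bengtsson's actual class, and like his version it deliberately does not track d.get():

```python
class TrackingDict(dict):
    """A dict that records which keys were read via d[key]."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._accessed = set()  # keys read so far

    def __getitem__(self, key):
        self._accessed.add(key)
        return super().__getitem__(key)

    def unused_keys(self):
        """Return the keys that were never read."""
        return set(self) - self._accessed


d = TrackingDict({"name": "Ada", "email": "ada@example.com", "age": 36})
_ = d["name"]
_ = d["email"]
print(d.unused_keys())  # {'age'}
```

A test could assert unused_keys() is empty after, say, serializing the dict out through an API, which is exactly the use case from the post.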

15 Jul 2025 8:00am GMT

Programiz: Getting Started with Python

In this tutorial, you will learn to write your first Python program.
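That first program is traditionally a short script that prints a greeting. A minimal sketch (the file name hello.py and variable name are illustrative, not from the tutorial):

```python
# hello.py -- a traditional first Python program
message = "Hello, World!"
print(message)  # Hello, World!
```

Run it with `python hello.py` from the directory where you saved the file.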

15 Jul 2025 7:45am GMT

Seth Michael Larson: Email has algorithmic curation, too

Communication technologies should be reliable, especially when both parties have opted in to consistent, reliable delivery. I don't want someone else deciding whether I receive a text message or email from a friend.

I associate "algorithmic curation" with social media platforms like TikTok, YouTube, Twitter, or Instagram. I don't typically think about email as a communication technology that contains algorithmic curation. Maybe that thinking should change?

Email for most people has algorithmic curation applied by their email provider. Email providers like Gmail automatically filter the email and decide which "category" the email ends up in, regardless of how much you trust the sender or if you have opted-in to their emails. Some of these categories are harmless, like "Social", where social media updates will be filtered into its own category but not hidden in any meaningful way.

The category that is destructive is one we know and love: "Spam". Spam filtering is usually a good thing: if you've ever looked in the folder, you understand why it exists. However, many email providers don't offer a way to opt out of spam filtering, even for senders that have sent you hundreds of high-quality, opted-in emails.

Where this is relevant is for email newsletters. I publish an email newsletter for this blog, and yet I would prefer you not use the newsletter and instead use RSS. If you enjoy the blog's content enough to get a notification when there's more, then you probably want delivery to be reliable.

My previous email was sent to the Spam folder by at least Gmail, and from reading the email I'm not sure why. The language isn't any different from the rest of my emails, and yet the number of deliveries and opens was less than half that of a typical email.

As someone trying to communicate to readers, what am I supposed to learn or do in this situation? Just like with other algorithmically curated platforms, I feel like I'm at the mercy of a process that isn't understandable and prone to change without warning.

Reliable communication technologies like RSS are the answer. If you're a regular consumer of internet content I highly recommend installing an RSS feed reader. My personal recommendation (that I use and pay for) is Inoreader. You'd be surprised which platforms offer RSS as a reliable alternative to their typical curation approach, for example YouTube offers RSS feeds for channels.
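Under the hood, an RSS reader just fetches and parses XML on a schedule. A minimal sketch of the parsing half using only Python's standard library, with an inline RSS 2.0 snippet standing in for a real feed URL:

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document standing in for a real feed
# (e.g. the XML a blog serves at its /feed URL).
RSS = """\
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post one</title><link>https://example.com/1</link></item>
    <item><title>Post two</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""


def entries(xml_text):
    """Yield (title, link) pairs from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")


print(list(entries(RSS)))
# [('Post one', 'https://example.com/1'), ('Post two', 'https://example.com/2')]
```

A real reader would fetch the XML over HTTP and poll periodically; third-party libraries like feedparser handle the many feed dialects (RSS 1.0/2.0, Atom) for you.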

If you're a web surfer, I hope this article inspires you to choose a reliable communication technology like RSS when "subscribing" to internet creatives so you never miss another publication. If you're a publisher, providing your content through a reliable opt-in medium like RSS, Patreon, or even Discord means only you and your readers are in control of who sees your content.

15 Jul 2025 12:00am GMT

14 Jul 2025

feedPlanet Python

Real Python: How to Debug Common Python Errors

Python debugging involves identifying and fixing errors in your code using tools like tracebacks, print() calls, breakpoints, and tests. In this tutorial, you'll learn how to interpret error messages, use print() to track variable values, and set breakpoints to pause execution and inspect your code's behavior. You'll also explore how writing tests can help prevent errors and ensure your code runs as expected.

By the end of this tutorial, you'll understand that:

  • Debugging means identifying, analyzing, and resolving issues in your Python code using systematic approaches.
  • Tracebacks are messages that help you pinpoint where errors occur in your code, allowing you to resolve them effectively.
  • Using print() helps you track variable values and understand code flow, aiding in error identification.
  • Breakpoints let you pause code execution to inspect and debug specific parts, improving error detection.
  • Writing and running tests before or during development aids in catching errors early and ensures code reliability.

Understanding these debugging techniques will empower you to handle Python errors confidently and maintain efficient code.

Get Your Code: Click here to download the free sample code that shows you how to debug common Python errors.

Take the Quiz: Test your knowledge with our interactive "How to Debug Common Python Errors" quiz. You'll receive a score upon completion to help you track your learning progress:


Take this quiz to review core Python debugging techniques like reading tracebacks, using print(), and setting breakpoints to find and fix errors.

How to Get Started With Debugging in Python

To debug means to unravel what is sometimes hidden. Debugging is the process of identifying, analyzing, and resolving issues, errors, or bugs in your code.

At its core, debugging involves systematically examining code to determine the root cause of a problem and implementing fixes to ensure the program functions as intended. Debugging is an essential skill for you to develop.

Debugging often involves using tools and techniques such as breakpoints, logging, and tests to achieve error-free, optimized code. In simpler terms, to debug is to dig through your code and error messages to find the source of a problem, and then come up with a solution.

Say you have the following code:

Python cat.py
print(cat)

The code that prints the variable cat is saved in a file called cat.py. If you try to run the file, then you'll get a traceback error saying that it can't find the definition for the variable named cat:

Shell
$ python cat.py
Traceback (most recent call last):
  File "/path_to_your_file/cat.py", line 1, in <module>
    print(cat)
          ^^^
NameError: name 'cat' is not defined

When Python encounters an error during execution, it prints a traceback, which is a detailed message that shows where the problem occurred in your code. In this example, the variable named cat can't be found because it hasn't been defined.

Here's what each part of this Python traceback means:

  • Traceback (most recent call last): a generic message sent by Python to notify you of a problem with your code.
  • File "/path_to_your_file/cat.py": points to the file where the error originated.
  • line 1, in <module>: tells you the exact line in the file where the error occurred.
  • print(cat): shows you the line of Python code that caused the error.
  • NameError: tells you the kind of error it is. In this example, you have a NameError.
  • name 'cat' is not defined: the specific error message that tells you a bit more about what's wrong with the piece of code.

In this example, the Python interpreter can't find any prior definition of the variable cat and therefore can't provide a value when you call print(cat). This is a common Python error that can happen when you forget to define variables with initial values.

To fix this error, you'll need to take a step-by-step approach by reading the error message, identifying the problem, and testing solutions until you find one that works.

In this case, the solution would be to assign a value to the variable cat before the print call. Here's an example:

Python cat.py
cat = "Siamese"
print(cat)

Notice that the error message disappears when you rerun your program, and the following output is printed:

Shell
$ python cat.py
Siamese

The text string stored in cat is printed as the code output. With this error resolved, you're well on your way to quickly debugging errors in Python.

In the next sections, you'll explore other approaches to debugging, but first, you'll take a closer look at using tracebacks.
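One of those approaches, print()-based tracing, can be sketched with a hypothetical buggy-function scenario (mean() and the probe line are illustrative, not from the tutorial):

```python
def mean(values):
    total = sum(values)
    # Temporary debugging probe: inspect intermediate state before the
    # division, e.g. to spot an empty list about to raise ZeroDivisionError.
    print(f"DEBUG: total={total!r}, count={len(values)}")
    return total / len(values)


print(mean([2, 4, 6]))  # the DEBUG line prints first, then 4.0
```

Once the cause is found, the probe line is removed; breakpoints and tests make this workflow less ad hoc, as the full article covers.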

Read the full article at https://realpython.com/debug-python-errors/ »


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

14 Jul 2025 2:00pm GMT

Real Python: Quiz: How to Debug Common Python Errors

In this quiz, you'll test your understanding of How to Debug Common Python Errors.

Debugging means identifying, analyzing, and resolving issues in your Python code. You'll revisit reading tracebacks, using print() for value tracking, setting breakpoints to pause execution, and writing tests to catch errors. Good luck!



14 Jul 2025 12:00pm GMT