16 Sep 2024

feedDZone Java Zone

Selenium Grid Tutorial: Essential Tips and How To Set It Up

As a tester or web developer, you need to check your applications for bugs and performance across all the available browsers and operating systems. With so many combinations to cover, not just different browsers but different versions of each, this quickly becomes a hefty task.

Most importantly, these processes should be automated as far as possible: in large companies, manually creating test cases and pipelines for every combination would be expensive and is the least preferable approach. This is where Selenium Grid, a widely used server-based test automation tool, comes into the picture.

16 Sep 2024 3:00pm GMT

feedPlanet Python

Real Python: Using Python's pip to Manage Your Projects' Dependencies

The standard package manager for Python is pip. It allows you to install and manage packages that aren't part of the Python standard library. If you're looking for an introduction to pip, then you've come to the right place!

In this tutorial, you'll learn how to:

  • Set up pip in your working environment
  • Fix common errors related to working with pip
  • Install and uninstall packages with pip
  • Manage projects' dependencies using requirements files

You can do a lot with pip, but the Python community is very active and has created some neat alternatives to pip. You'll learn about those later in this tutorial.

Get Your Cheat Sheet: Click here to download a free pip cheat sheet that summarizes the most important pip commands.

Getting Started With pip

So, what exactly does pip do? pip is a package manager for Python. That means it's a tool that allows you to install and manage libraries and dependencies that aren't distributed as part of the standard library. The name pip was introduced by Ian Bicking in 2008:

I've finished renaming pyinstall to its new name: pip. The name pip is [an] acronym and declaration: pip installs packages. (Source)

Package management is so important that Python's installers have included pip since versions 3.4 and 2.7.9, for Python 3 and Python 2, respectively. Many Python projects use pip, which makes it an essential tool for every Pythonista.

The concept of a package manager might be familiar to you if you're coming from another programming language. JavaScript uses npm for package management, Ruby uses gem, and the .NET platform uses NuGet. In Python, pip has become the standard package manager.

Finding pip on Your System

The Python installer gives you the option to install pip when installing Python on your system. In fact, the option to install pip with Python is checked by default, so pip should be ready for you to use after installing Python.

Note: On some Linux (Unix) systems like Ubuntu, pip comes in a separate package called python3-pip, which you need to install with sudo apt install python3-pip. It's not installed by default with the interpreter.

You can verify that pip is available by looking for the pip3 executable on your system. Depending on your operating system, use the appropriate command below:

Windows PowerShell
PS> where pip3

The where command on Windows will show you where you can find the executable of pip3. If Windows can't find an executable named pip3, then you can also try looking for pip without the three (3) at the end.

Shell
$ which pip3

The which command on Linux systems and macOS shows you where the pip3 binary file is located.

On Windows and Unix systems, pip3 may be found in more than one location. This can happen when you have multiple Python versions installed. If you can't find pip in any location on your system, then you may consider reinstalling pip.

Instead of running your system pip directly, you can also run it as a Python module. In the next section, you'll learn how.

Running pip as a Module

When you run your system pip directly, the command itself doesn't reveal which Python version pip belongs to. This unfortunately means that you could use pip to install a package into the site-packages of an old Python version without noticing. To prevent this from happening, you should run pip as a Python module:

Shell
$ python -m pip

Notice that you use python -m to run pip. The -m switch tells Python to run a module as an executable of the python interpreter. This way, you can ensure that your system default Python version runs the pip command. If you want to learn more about this way of running pip, then you can read Brett Cannon's insightful article about the advantages of using python -m pip.

Note: Depending on how you installed Python, your Python executable may have a different name than python. You'll see python used in this tutorial, but you may have to adapt the commands to use something like py or python3 instead.
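
If you're launching pip from within a Python script rather than from a shell, one way to make sure the right interpreter is used, regardless of what its executable is called, is to go through sys.executable. Here's a minimal sketch; the package name is just an example:

import subprocess
import sys

# Run pip with the exact interpreter that is executing this script,
# so the package lands in that interpreter's site-packages.
subprocess.run([sys.executable, "-m", "pip", "install", "requests"], check=True)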

Sometimes you may want to be more explicit and limit packages to a specific project. In situations like this, you should run pip inside a virtual environment.

Using pip in a Python Virtual Environment

Read the full article at https://realpython.com/what-is-pip/ »



16 Sep 2024 2:00pm GMT

feedDrupal.org aggregator

ThinkDrop Consulting: Reflections on OpenDevShop and the hidden costs of open source maintainership.

By Jon Pugh

#OpenDevShop failed because it tried to solve too many problems at the same time.

This directed the energy away from designing for the future. When the future arrived, it was wholly unprepared.

I saw the potential to make #Aegir an all-in-one management console for all your web tech, so I created server management things, and local CLI things, and other silly, not so useful things.

DevShop became a huge burden. Unmaintainable. Un-upgradable. Working untold unpaid hours and self-funding travel and speaking took a major toll on my life, financially and personally.

16 Sep 2024 1:34pm GMT

The Drop Times: QED42's Journey in Shaping Digital Experiences: Insights from Piyuesh Kumar

Discover how QED42 is shaping the future of digital experiences! In this exclusive interview, Piyuesh Kumar, Director of Technology at QED42, shares insights on their journey with Drupal, their groundbreaking contributions, and the role of AI in transforming content management systems. Get a sneak peek into the upcoming advancements in Drupal and what to expect at DrupalCon Barcelona 2024. Don't miss this deep dive into QED42's vision and impact!

16 Sep 2024 1:10pm GMT

feedPlanet Python

TechBeamers Python: How to Create Dynamic QR Code in Python

This tutorial guides you on how to create dynamic QR codes in Python. It involves a bit more than just generating the QR code itself. Before reading this, you must know how a QR code generator works. Steps to Create Dynamic QR Codes: Dynamic QR codes require the ability to track and update the information […]

The post How to Create Dynamic QR Code in Python appeared first on TechBeamers.

16 Sep 2024 12:14pm GMT

feedDZone Java Zone

How to Merge Excel XLSX Files in Java

In this article, we're going to learn how to increase the efficiency of a common file merging workflow with the help of a web API solution. Specifically, we are going to learn how to merge Excel XLSX files - one of the most common document types we can expect to work with in a file-processing automation environment.

Context for Programmatic XLSX File Merging

If we're writing code to solve problems related to file processing efficiency and automation, there's a good chance we're creating and/or expanding applications that deal with large volumes of Excel files. We may, for example, find that there's no automated workflow currently in place (or, perhaps, an inefficient one) to combine the various unique Excel reports created by each individual department in our organization.

16 Sep 2024 12:00pm GMT

feedPlanet Python

PyCharm: 7 Ways To Use Jupyter Notebooks inside PyCharm

Jupyter notebooks allow you to tell stories by creating and sharing data, equations, and visualizations sequentially, with a supporting narrative as you go through the notebook.

Jupyter notebooks in PyCharm Professional provide functionality above and beyond that of browser-based Jupyter notebooks, such as code completion, dynamic plots, and quick statistics, to help you explore and work with your data quickly and effectively.

Let's take a look at 7 ways you can use Jupyter notebooks in PyCharm to achieve your goals. They are:

  1. Creating or connecting to an existing notebook
  2. Importing your data
  3. Getting acquainted with your data
  4. Using JetBrains AI Assistant
  5. Exploring your code
  6. Getting insights from your code
  7. Sharing your insights and charts

The Jupyter notebook that we used in this demo is available on GitHub.

1. Creating or connecting to an existing notebook

You can create and work on your Jupyter notebooks locally or connect to one remotely with PyCharm. Let's take a look at both options so you can decide for yourself.

Creating a new Jupyter notebook

To work with a Jupyter notebook locally, you need to go to the Project tool window inside PyCharm, navigate to the location where you want to add the notebook, and invoke a new file. You can do this by using either your keyboard shortcuts ⌘N (macOS) / Alt+Ins (Windows/Linux) or by right-clicking and selecting New | Jupyter Notebook.

Give your new notebook a name, and PyCharm will open it ready for you to start work. You can also drag local Jupyter notebooks into PyCharm, and the IDE will automatically recognise them for you.

Connecting to a remote Jupyter notebook

Alternatively, you can connect to a remote Jupyter notebook by selecting Tools | Add Jupyter Connection. You can then choose to start a local Jupyter server, connect to an existing running local Jupyter server, or connect to a Jupyter server using a URL - all of these options are supported.

Now you have your Jupyter notebook, you need some data!

2. Importing your data

Data generally comes from one of two sources: a CSV file or a database. Let's look at importing data from a CSV file first.

Importing from a CSV file

Polars and pandas are the two most commonly used libraries for importing data into Jupyter notebooks. I'll give you code for both in this section, and you can check out the documentation for both Polars and pandas and learn how Polars is different to pandas.

You need to ensure your CSV is somewhere in your PyCharm project, perhaps in a folder called `data`. Then, you can import pandas and use it to read the data in:

import pandas as pd
df = pd.read_csv("../data/airlines.csv")

In this example, airlines.csv is the file containing the data we want to manipulate. To run this and any code cell in PyCharm, use ⇧⏎ (macOS) / Shift+Enter (Windows/Linux). You can also use the green run arrows on the toolbar at the top.

If you prefer to use Polars, you can use this code:

import polars as pl
df = pl.read_csv("../data/airlines.csv")

Importing from a database

If your data is in a database, as is often the case for internal projects, importing it into a Jupyter notebook will require just a few more lines of code. First, you need to set up your database connection; in this example, we're using PostgreSQL with SQLAlchemy.

For pandas, you need to use this code to read the data in:

import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://jetbrains:jetbrains@localhost/demo")
df = pd.read_sql(sql=text("SELECT * FROM airlines"),
                 con=engine.connect())

And for Polars, it's this code:

import polars as pl
from sqlalchemy import create_engine

engine = create_engine("postgresql://jetbrains:jetbrains@localhost/demo")
connection = engine.connect()
query = "SELECT * FROM airlines"
df = pl.read_database(query, connection)

3. Getting acquainted with your data

Now we've read our data in, we can take a look at the DataFrame or `df` as we will refer to it in our code. To print out the DataFrame, you only need a single line of code, regardless of which method you used to read the data in:

df

DataFrames

PyCharm first displays your DataFrame as a table so you can explore it. You can scroll horizontally through the DataFrame and click on any column header to order the data by that column. You can click on the Show Column Statistics icon on the right-hand side and select Compact or Detailed to get some helpful statistics on each column of data.
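
If you want a similar overview outside of the IDE, plain pandas can give you quick summaries in code as well. A minimal sketch, assuming the `df` we loaded earlier:

df.info()                   # column names, dtypes, and non-null counts
df.describe(include="all")  # summary statistics for every column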

Dynamic charts

You can use PyCharm to get a dynamic chart of your DataFrame by clicking on the Chart View icon on the left-hand side. We're using pandas in this example, but Polars DataFrames also have the same option.

Click on the Show Series Settings icon (a cog) on the right-hand side to configure your plot to meet your needs:

In this view, you can hover your mouse over your data to learn more about it and easily spot outliers:

You can do all of this with Polars, too.

4. Using JetBrains AI Assistant

JetBrains AI Assistant has several offerings that can make you more productive when you're working with Jupyter notebooks inside PyCharm. Let's take a closer look at how you can use JetBrains AI Assistant to explain a DataFrame, write code, and even explain errors.

Explaining DataFrames

If you've got a DataFrame but are unsure where to start, you can click the purple AI icon on the right-hand side of the DataFrame and select Explain DataFrame. JetBrains AI Assistant will use its context to give you an overview of the DataFrame:

You can use the generated explanation to aid your understanding.

Writing Code

You can also get JetBrains AI Assistant to help you write code. Perhaps you know what kind of plot you want, but you're not 100% sure what the code should look like. Well, now you can use JetBrains AI Assistant to help you. Let's say you want to use 'matplotlib' to create a chart that finds the relationship between 'TimeMonthName' and 'MinutesDelayedWeather'. By specifying the column names, we're giving more context to the request which improves the reliability of the generated code. Try it with the following prompt:

Give me code using matplotlib to create a chart which finds the relationship between 'TimeMonthName' and 'MinutesDelayedWeather' for my dataframe df

If you like the resulting code, you can use the Insert Snippet at Caret button to insert the code and then run it:

import matplotlib.pyplot as plt
# Assuming your data is in a DataFrame named 'df'
# Replace 'df' with the actual name of your DataFrame if different


# Plotting
plt.figure(figsize=(10, 6))
plt.bar(df['TimeMonthName'], df['MinutesDelayedWeather'], color='skyblue')
plt.xlabel('Month')
plt.ylabel('Minutes Delayed due to Weather')
plt.title('Relationship between TimeMonthName and MinutesDelayedWeather')
plt.xticks(rotation=45)
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.tight_layout()


plt.show()

If you don't want to open the AI Assistant tool window, you can use the AI cell prompt to ask your questions. For example, we can ask the same question here and get the code we need:

Explaining errors

You can also get JetBrains AI Assistant to explain errors for you. When you get an error, click Explain with AI:

You can use the resulting output to further your understanding of the problem and perhaps even get some code to fix it!

5. Exploring your code

PyCharm can help you get an overview of your Jupyter notebook, complete parts of your code to save your fingers, refactor it as required, debug it, and even add integrations to help you take it to the next level.

Tips for navigating and optimizing your code

Our Jupyter notebooks can grow large quite quickly, but thankfully you can use PyCharm's Structure view to see all your notebook's headings by pressing ⌘7 (macOS) / Alt+7 (Windows/Linux).

Code completion

Another helpful feature that you can take advantage of when using Jupyter notebooks inside PyCharm is code completion. You get both basic and type-based code completion out of the box with PyCharm, but you can also enable Full Line Code Completion in PyCharm Professional, which uses a local AI model to provide suggestions. Lastly, JetBrains AI Assistant can also help you write code and discover new libraries and frameworks.

Refactoring

Sometimes you need to refactor your code, and in that case, you only need to know one keyboard shortcut: ⌃T (macOS) / Shift+Ctrl+Alt+T (Windows/Linux). Then you can choose the refactoring you want to invoke. Pick from popular options such as Rename, Change Signature, and Introduce Variable, or lesser-known options such as Extract Method, to change your code without changing the semantics:

As your Jupyter notebook grows, it's likely that your import statements will also grow. Sometimes you might import packages such as polars and numpy, but later realize that numpy is never used directly in your code, so its import is unnecessary.

To catch these cases and keep your code tidy, you can invoke Optimize Imports ⌃⌥O (macOS) / Ctrl+Alt+O (Windows/Linux) and PyCharm will remove the ones you don't need.

Debugging your code

You might not have used the debugger in PyCharm yet, and that's okay. Just know that it's there and ready to support you when you need to better understand some behavior in your Jupyter notebook.

Place a breakpoint on the line you're interested in by clicking in the gutter or by using ⌘F8 (macOS) / Ctrl+F8 (Windows/Linux), and then run your code with the debugger attached with the debug icon on the top toolbar:

You can also invoke PyCharm's debugger in your Jupyter notebook with ⌥⇧⏎ (macOS) / Shift+Alt+Enter (Windows/Linux). There are some restrictions when it comes to debugging your code in a Jupyter notebook, but please try this out for yourself and share your feedback with us.

Adding integrations into PyCharm

IDEs wouldn't be complete without the integrations you need. PyCharm Professional 2024.2 brings two new integrations to your workflow: Databricks and Hugging Face.

You can enable the integrations with both Databricks and Hugging Face by going to your Settings ⌘, (macOS) / Ctrl+Alt+S (Windows/Linux), selecting Plugins, and searching for the plugin with the corresponding name on the Marketplace tab.

6. Getting insights from your code

When analyzing your data, there's a difference between categorical and continuous variables. Categorical data has a finite number of discrete groups or categories, whereas continuous data is one continuous measurement. Let's look at how we can extract different insights from both the categorical and continuous variables in our airlines dataset.

Continuous variables

We can get a sense of how continuous data is distributed by looking at measures of the average value in that data and the spread of the data around the average. In normally distributed data, we can use the mean to measure the average and the standard deviation to measure the spread. However, when data is not distributed normally, we can get more accurate information using the median and the interquartile range (this is the difference between the seventy-fifth and twenty-fifth percentiles). Let's look at one of our continuous variables to understand the difference between these measurements.

In our dataset, we have lots of continuous variables, but we'll work with `NumDelaysLateAircraft` to see what we can learn. Let's use the following code to get some summary statistics for just that column:

df['NumDelaysLateAircraft'].describe()

Looking at this data, we can see that there is a big difference between the `mean` of ~789 and the `median` (our fiftieth percentile, indicated by "50%" in the table below) of ~618.
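
If you'd like to double-check these numbers in code rather than reading them off the table, here's a minimal sketch, again assuming the `df` loaded earlier:

col = df['NumDelaysLateAircraft']
print(col.mean())    # arithmetic mean, pulled upwards by the extreme delays
print(col.median())  # 50th percentile, robust to outliers
print(col.quantile(0.75) - col.quantile(0.25))  # interquartile range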

This indicates a skew in our variable's distribution, so let's use PyCharm to explore it further. Click on the Chart View icon at the top left. Once the chart has been rendered, we'll change the series settings represented by the cog on the right-hand side of the screen. Change your x-axis to `NumDelaysLateAircraft` and your y-axis to `NumDelaysLateAircraft`.

Now drop down the y-axis using the little arrow and select `count`. The final step is to change the chart type to Histogram using the icons in the top-right corner:

Now that we can see the skew laid out visually, we can see that most of the time, the delays are not too excessive. However, we have a number of more extreme delays - one aircraft is an outlier on the right and it was delayed by 4,509 minutes, which is just over three days!

In statistics, the mean is very sensitive to outliers because it's an arithmetic average that takes every value into account, unlike the median, which, if you ordered all observations in your variable, would sit exactly in the middle of those values. When the mean is higher than the median, it's because you have outliers on the right-hand side of the data, the higher side, as we had here. In such cases, the median is a better indicator of the true average delay, as you can see if you look at the histogram.

Categorical variables

Let's take a look at how we can use code to get some insights from our categorical variables. In order to get something that's a little more interesting than just `AirportCode`, we'll analyze how many aircraft were delayed by weather, `NumDelaysWeather`, in the different months of the year, `TimeMonthName`.

Use this code to group `NumDelaysWeather` with `TimeMonthName`:

result = df[['TimeMonthName', 'NumDelaysWeather']].groupby('TimeMonthName').sum()
result

This gives us the DataFrame again in table format, but click the Chart View icon on the left-hand side of the PyCharm UI to see what we can learn:

This is okay, but it would be helpful to have the months ordered according to the Gregorian calendar. Let's first create a variable for the months that we expect:

month_order = [
   "January", "February", "March", "April", "May", "June",
   "July", "August", "September", "October", "November", "December"
]

Now we can ask PyCharm to use the order that we've just defined in `month_order`:

# Convert the 'TimeMonthName' column to a categorical type with the specified order
df["TimeMonthName"] = pd.Categorical(df["TimeMonthName"], categories=month_order, ordered=True)


# Now you can group by 'TimeMonthName' and perform sum operation, specifying observed=False
result = df[['TimeMonthName', 'NumDelaysWeather']].groupby('TimeMonthName', observed=False).sum()


result

We then click on the Chart View icon once more, but something's wrong!

Are we really saying that there were no flights delayed in February? That can't be right. Let's check our assumption with some more code:

df['TimeMonthName'].value_counts()

Aha! Now we can see that `Febuary` has been misspelt in our data set, so the correct spelling in our variable name does not match. Let's update the spelling in our dataset with this code:

df["TimeMonthName"] = df["TimeMonthName"].replace("Febuary", "February")
df['TimeMonthName'].value_counts()

Great, that looks right. Now we should be able to re-run our earlier code and get a chart view that we can interpret:

From this view, we can see that there is a higher number of delays during the months of December, January, and February, and then again in June, July, and August. However, we have not standardized this data against the total number of flights, so there may simply be more flights in those winter and summer months, which would produce more delays in absolute terms.
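
If you did want to standardize the delay counts, a rough sketch could look like the one below. Note that the total-flights column name used here is hypothetical; you would need to replace it with whatever your dataset actually provides, if such a column exists at all:

# "NumFlightsTotal" is a made-up column name standing in for a real
# per-row total-flight count; adjust it to match your dataset.
totals = df[['TimeMonthName', 'NumFlightsTotal']].groupby('TimeMonthName', observed=False).sum()
delay_rate = result['NumDelaysWeather'] / totals['NumFlightsTotal']
delay_rate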

7. Sharing your insights and charts

When your masterpiece is complete, you'll probably want to export data, and you can do that in various ways with Jupyter notebooks in PyCharm.

Exporting a DataFrame

You can export a DataFrame by clicking on the down arrow on the right-hand side:

You have lots of helpful formats to choose from, including SQL, CSV, and JSON:
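
If you'd rather script the export than use the UI, pandas can write the same formats directly. A quick sketch with hypothetical file names:

df.to_csv("airlines_export.csv", index=False)         # CSV export
df.to_json("airlines_export.json", orient="records")  # JSON export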

Exporting charts

If you prefer to export the interactive plot, you can do that too by clicking on the Export to PNG icon on the right-hand side:

Viewing your notebook as a browser

You can view your whole Jupyter notebook at any time in a browser by clicking the icon in the top-right corner of your notebook:

Finally, if you want to export your Jupyter notebook to a Python file, 2024.2 lets you do that too! Right-click on your Jupyter notebook in the Project tool window and select Convert to Python File. Follow the instructions, and you're done!

Summary

Using Jupyter notebooks inside PyCharm Professional provides extensive functionality, enabling you to create code faster, explore data easily, and export your projects in the formats that matter to you.

Download PyCharm Professional to try it out for yourself! Get an extended trial today and experience the difference PyCharm Professional can make in your data science endeavors.

Use the promo code "PyCharmNotebooks" at checkout to activate your free 60-day subscription to PyCharm Professional. The free subscription is available for individual users only.

16 Sep 2024 10:48am GMT

15 Sep 2024

feedDrupal.org aggregator

#! code: Drupal 11: Using The Finished State In Batch Processing

This is the third article in a series of articles about the Batch API in Drupal. The Batch API is a system in Drupal that allows data to be processed in small chunks in order to prevent timeout errors or memory problems.

So far in this series we have looked at creating a batch process using a form and then creating a batch class so that batches can be run through Drush. Both of these examples used the Batch API to run a set number of items through a set number of process function callbacks. When setting up the batch run we created a list of items that we wanted to process and then split this list up into chunks, each chunk being sent to a batch process callback.

There is another way to set up the Batch API that will run the same number of operations without defining how many times we want to run them first. This is possible by using the "finished" setting in the batch context.

Let's create a batch process that we can run and control using the finished setting.

Setting Up

First we need to create a batch process that will accept the array we want to process. This is the same array as we have processed in the last two articles, but in this case we are passing the entire array to a single callback via the addOperation() method of the BatchBuilder class.

Read more

15 Sep 2024 6:51pm GMT

14 Sep 2024

feedLinux Today

How to Install MariaDB on Ubuntu 24.04: A Step-By-Step Guide

Discover a step-by-step guide to install and configure MariaDB on Ubuntu and how to update and remove it with practical examples.

The post How to Install MariaDB on Ubuntu 24.04: A Step-By-Step Guide appeared first on Linux Today.

14 Sep 2024 12:00am GMT

13 Sep 2024

feedLinux Today

Ubuntu 22.04.5 LTS (Jammy Jellyfish) Is Now Available Powered by Linux Kernel 6.8

Ubuntu 22.04.5 LTS is here six and a half months after Ubuntu 22.04.4 LTS as an up-to-date installation media designed for those of you who want to install Ubuntu 22.04 LTS (Jammy Jellyfish) on a new computer and don't want to download hundreds of updated packages from the repositories after the installation.

The post Ubuntu 22.04.5 LTS (Jammy Jellyfish) Is Now Available Powered by Linux Kernel 6.8 appeared first on Linux Today.

13 Sep 2024 8:00pm GMT

KDE Gear 24.08.1 Apps Collection Rolls Out, Here’s What’s New

The KDE Gear 24.08.1 apps collection is now available for download, with key fixes for Dolphin, KClock, and Filelight and updated translations.

The post KDE Gear 24.08.1 Apps Collection Rolls Out, Here's What's New appeared first on Linux Today.

13 Sep 2024 6:45pm GMT

feedDjango community aggregator: Community blog posts

django-content-editor now supports nested sections

django-content-editor now supports nested sections

django-content-editor (and its ancestor FeinCMS) has been the Django admin extension for editing content consisting of reusable blocks since 2009. In recent years we have more and more often started automatically grouping related items, e.g. for rendering a sequence of images as a gallery. But sometimes it's nice to give editors more control. This has long been possible by using blocks which open a subsection and blocks which close a subsection, but it hasn't been friendly to content managers, especially when using nested sections.

The content editor now has first-class support for such nested sections. Here's a screenshot showing the nesting:

django-content-editor with sections

Finally it's possible to visually group blocks into sections, collapse those sections at once, and drag and drop whole sections into place instead of having to select the involved blocks individually.

The best part about it is that the content editor still supports all Django admin widgets, as long as those widgets have support for the Django administration interface's inline form events! Moving DOM nodes around breaks attached JavaScript behaviors, but we do not actually move DOM nodes around after the initialization - instead, we use Flexbox ordering to visually reorder blocks. It's a bit more work than using a ready-made sortable plugin, but - as mentioned - the prize is that we don't break any other Django admin extensions.

Simple patterns

I previously already reacted to a blog post by Lincoln Loop here in my post My reaction to the block-driven CMS blog post.

The latest blog post, Solving the Messy Middle: a Simple Block Pattern for Wagtail CMS was interesting as well. It dives into the configuration of a Wagtail stream field which allows composing content out of reusable blocks of content (sounds familiar!). The result is saved in a JSON blob in the database with all the advantages and disadvantages that entails.

Now, django-content-editor is a worthy competitor when you do not want to add another interface to your website besides the user-facing frontend and the Django administration interface.

The example from the Lincoln Loop blog post can be replicated quite closely with django-content-editor by using sections. I'm using the django-json-schema-editor package for the section plugin since it easily allows adding more fields if some section type needs it.

Here's an example model definition:

# Models
from content_editor.models import Region, create_plugin_base
from django.db import models
from django_json_schema_editor.plugins import JSONPluginBase
from feincms3 import plugins

class Page(models.Model):
    # You have to define regions; each region gets a tab in the admin interface
    regions = [Region(key="content", title="Content")]

    # Additional fields for the page...

PagePlugin = create_plugin_base(Page)

class RichText(plugins.richtext.RichText, PagePlugin):
    pass

class Image(plugins.image.Image, PagePlugin):
    pass

class Section(JSONPluginBase, PagePlugin):
    pass

AccordionSection = Section.proxy(
    "accordion",
    schema={"type": "object", "properties": {"title": {"type": "string"}}},
)
CloseSection = Section.proxy(
    "close",
    schema={"type": "object", "properties": {}},
)

Here's the corresponding admin definition:

# Admin
from content_editor.admin import ContentEditor
from django.contrib import admin
from django_json_schema_editor.plugins import JSONPluginInline
from feincms3 import plugins

from . import models  # assuming the models above live in this app's models module

@admin.register(models.Page)
class PageAdmin(ContentEditor):
    inlines = [
        plugins.richtext.RichTextInline.create(models.RichText),
        plugins.image.ImageInline.create(models.Image),
        JSONPluginInline.create(models.AccordionSection, sections=1),
        JSONPluginInline.create(models.CloseSection, sections=-1),
    ]

The somewhat cryptic sections= argument says how many levels of sections the individual blocks open or close.

To render the content including accordions I'd probably use a feincms3 renderer. At the time of writing the renderer definition for sections is a bit tricky.

from collections import deque

from feincms3.renderer import RegionRenderer, render_in_context, template_renderer

class PageRenderer(RegionRenderer):
    def handle(self, plugins, context):
        plugins = deque(plugins)
        yield from self._handle(plugins, context)

    def _handle(self, plugins, context, *, in_section=False):
        while plugins:
            if isinstance(plugins[0], models.Section):
                section = plugins.popleft()
                if section.type == "close":
                    if in_section:
                        return
                    # Ignore close section plugins when not inside section
                    continue

                if section.type == "accordion":
                    yield render_in_context("accordion.html", {
                        "title": accordion.data["title"],
                        "content": self._handle(plugins, context, in_section=True),
                    })

            else:
                yield self.render_plugin(plugins.popleft(), context)

renderer = PageRenderer()
renderer.register(models.RichText, template_renderer("plugins/richtext.html"))
renderer.register(models.Image, template_renderer("plugins/image.html"))
renderer.register(models.Section, "")

Closing thoughts

Sometimes, I think to myself, I'll "just" write a "simple" blog post. I get what I deserve when using those forbidden words. This blog post is neither short nor simple. That being said, only the rendering code is a bit tricky; the rest is quite straightforward. The amount of code in django-content-editor and feincms3 is reasonable as well. Even though it may look like a lot, you'll still be running less code in production than with comparable solutions built on Django.

13 Sep 2024 5:00pm GMT

feedPlanet Lisp

Patrick Stein: I Broke It

Unfortunately, I managed to delete my WordPress database at a time when the most recent backup I had was from 11 years ago.

So… I will hopefully get some newer information uploaded again sometime.

But, most of my content is gone. 🙁

13 Sep 2024 3:46pm GMT

feedDjango community aggregator: Community blog posts

Django News - Python 3.13.0RC2 - Sep 13th 2024

News

Python 3.13.0RC2 and security updates for 3.8 through 3.12

Python 3.13.0RC2 and security updates for Python 3.12.6, 3.11.10, 3.10.15, 3.9.20, and 3.8.20 are now available!

blogspot.com

DjangoCon US 2024 last call!

DjangoCon US starts September 22nd. It's the last call to buy an in-person or online ticket to attend this year!

ti.to

Python in Visual Studio Code - September 2024 Release

The Python extension now supports Django unit tests.

microsoft.com

Updates to Django

Today 'Updates to Django' is presented by Raffaella Suardini from Djangonaut Space!

Last week we had 12 pull requests merged into Django by 10 different contributors - including 4 first-time contributors! Congratulations to SirenityK, Mariatta, Wassef Ben Ahmed and github-user-en for having their first commits merged into Django - welcome on board!

Last chance to apply for Djangonaut Space 🚀

Applications will close on September 14. For more information, check this article that explains the selection process. Apply here

Django Newsletter

Sponsored Link 1

HackSoft - Your Django Development Partner Beyond Code

Elevate your Django projects with HackSoft! Try our expert consulting services and kickstart your project.

hacksoft.io

Articles

Django from first principles, part 18

The final post in a series on building and refactoring a Django blog site from scratch.

mostlypython.com

Django: rotate your secret key, fast or slow

Adam Johnson covers the two main ways to rotate secret keys, including a Django 4.1 feature that allows rotating to a new key whilst accepting data signed with the old one.

adamj.eu

django-filter: filtering a foreign key model property

How to filter a foreign key model property with django-filter.

valentinog.com

Django: a pattern for settings-configured API clients

How to get around the problem that an API client is instantiated as a module-level variable based on some settings.

adamj.eu

UV with Django

Using UV to manage dependencies of your Django application.

pecar.me

Signatures are like backups · Alex Gaynor

"Backups don't matter, only restores matter."

alexgaynor.net

Tutorials

Django-allauth: Site Matching Query Does Not Exist

How to fix a common configuration mistake in django-allauth.

learndjango.com

Videos

Djangonaut Space Overview and Ask Me Anything (AMA)

This is an explanation of the Djangonaut Space program sessions, with a Q&A at the end. It has specific details on Session 3 of 2024, but the information is relevant for future sessions.

Session 3 applications close on September 14th, so apply if interested!

youtu.be

DjangoCon EU 2013 - Class-Based Views: Untangling the mess

This talk is from 2013, but it is still relevant to anyone dealing with function-based and (generic) class-based views. Russell Keith-Magee goes into the history of why GCBVs were added.

youtu.be

Sponsored Link 2

Try Scout APM for free!

Sick of performance issues? Enter Scout's APM tool for Python apps. Easily pinpoint and fix slowdowns with intelligent tracing logic. Optimize performance hassle-free, delighting your users.

ter.li

Podcasts

Django Chat #165: Fall 2024 Podcast Relaunch

This mini-episode starts off the fall season and focuses on what's new in Django, upcoming DjangoCon US talks, thoughts on the User model, Carlton's new Stack Report newsletter, mentoring mentors, and more.

djangochat.com

Django News Jobs

Back-end developers at ISM Fantasy Games 🆕

Python Engineer - API and SaaS Application Development at Aidentified, LLC

Software Developer at Habitat Energy

Senior Fullstack Python Engineer at Safety Cybersecurity

Django Newsletter

Projects

kennethlove/django-migrator

The Migrator project provides custom Django management commands to manage database migrations. It includes commands to revert and redo migrations for a specified app or the entire project.

github.com

carltongibson/django-unique-user-email

Enable login-by-email with the default User model for your Django project by making auth.User.email unique.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

13 Sep 2024 3:00pm GMT

feedKernel Planet

Linux Plumbers Conference: Playback of Presenter and BBB Training is available

We recorded a playback of the 10:00 session, which you can watch to get a feel for how the BBB platform works:

https://bbb1.lpc.events/playback/presentation/2.3/62e3456da3c0598910e28d204ee24b669d714c04-1725975646004

In addition, your credentials are the email address you registered with in Cvent and the confirmation number it sent back for your registration. You can use those to log in here:

https://meet.lpc.events

And practice in a Hackroom (after logging in, select Hackrooms from the left nav and then pick a Hackroom that is empty).

13 Sep 2024 2:37pm GMT

feedDZone Java Zone

How To Solve OutOfMemoryError: Metaspace

There are 9 types of java.lang.OutOfMemoryErrors, each signaling a unique memory-related issue within Java applications. Among these, java.lang.OutOfMemoryError: Metaspace is a challenging error to diagnose. In this post, we'll delve into the root causes behind this error, explore potential solutions, and discuss effective diagnostic methods to troubleshoot this problem. Let's equip ourselves with the knowledge and tools to conquer this common adversary.

JVM Memory Regions

To better understand OutOfMemoryError, we first need to understand different JVM Memory regions. Here is a video clip that gives a good introduction to different JVM memory regions. But in a nutshell, JVM has the following memory regions:

Figure 1: JVM memory regions

13 Sep 2024 2:00pm GMT

feedPlanet Lisp

Yukari Hafner: Porting SBCL to the Nintendo Switch

https://filebox.tymoon.eu//file/TWpjNU5nPT0=

For the past two years Charles Zhang and I have been working on getting my game engine, Trial, running on the Nintendo Switch. The primary challenge in doing this is porting the underlying Common Lisp runtime to work on this platform. We knew going into this that it was going to be hard, but it has proven to be quite a bit more tricky than expected. I'd like to outline some of the challenges of the platform here for posterity, though please also understand that due to Nintendo's NDA I can't go into too much detail.

Current Status

I want to start off with where we are at, at the time of writing this article. We managed to port the runtime and compiler to the point where we can compile and execute arbitrary lisp code directly on the Switch. We can also interface with shared libraries, and I've ported a variety of operating system portability libraries that Trial needs to work on the Switch as well.

The above photo shows Trial's REPL example running on the Switch devkit. Trial is setting up the OpenGL context, managing input, allocating shaders, all that good stuff, to get the text shown on screen; the Switch does not offer a terminal of its own.

Unfortunately it also crashes shortly after as SBCL is trying to engage its garbage collector. The Switch has some unique constraints in that regard that we haven't managed to work around quite yet. We also can't output any audio yet, since the C callback mechanism is also broken. And of course, there's potentially a lot of other issues yet to rear their head, especially with regards to performance.

Whatever the case, we've gotten pretty far! This work hasn't been free, however. While I'm fine not paying myself a fair salary, I can't in good conscience have Charles invest so much of his valuable time into this for nothing. So I've been paying him on a monthly basis for all the work he's been doing on this port. Up until now that has cost me ~17'000 USD. As you may or may not know, I'm self-employed. All of my income stems from sales of Kandria and donations from generous supporters on Patreon, GitHub, and Ko-Fi. On a good month this totals about 1'200 USD. On a bad month this totals to about 600 USD. That would be hard to get by in a cheap country, and it's practically impossible in Zürich, Switzerland.

I manage to get by by living with my parents and being relatively frugal with my own personal expenses. Everything I actually earn and more goes back into hiring people like Charles to do cool stuff. Now, I'm ostensibly a game developer by trade, and I am working on a currently unannounced project. Games are very expensive to produce, and I do not have enough reserves to bankroll it anymore. As such, it has become very difficult to decide what to spend my limited resources on, and especially a project like this is much more likely to be axed given that I doubt Kandria sales on the Switch would even recoup the porting costs.

To get to the point: if you think this is a cool project and you would like to help us make the last few hurdles for it to be completed, please consider supporting me on Patreon, GitHub, or Ko-Fi. On Patreon you get news for every new library I release (usually at least one a month) and an exclusive monthly roundup of the current development progress of the unannounced game. Thanks!

An Overview

First, here's what's publicly known about the Switch's environment: user code runs on an ARM64 Cortex-A57 chip with four cores and 4 GB RAM, and on top of a proprietary microkernel operating system that was initially developed for the Nintendo 3DS.

SBCL already has an ARM64 Linux port, so the code generation side is already solved. Kandria also easily fits into 4GB RAM, so there's no issues there either. The difficulties in the port reside entirely in interfacing with the surrounding proprietary operating system of the switch. The system has some constraints that usual PC operating systems do not have, which are especially problematic for something like Lisp as you'll see in the next section.

Fortunately for us, and this is the reason I even considered a port in the first place, the Switch is also the only console to support the OpenGL graphics library for rendering, which Trial is based upon. Porting Trial itself to another graphics library would be a gigantic effort that I don't intend on undertaking any time soon. The Xbox only supports DirectX, though supposedly there's an OpenGL -> DirectX layer that Microsoft developed, so that might be possible. The Playstation on the other hand apparently still sports a completely proprietary graphics API, so I don't even want to think about porting to that platform.

Anyway, in order to get started developing I had to first get access. I was lucky enough that Nintendo of Europe is fairly accommodating to indies and did grant my request. I then had to buy a devkit, which costs somewhere around 400 USD. The devkit and its SDK only run on Windows, which isn't surprising, but will also be a relevant headache later.

Before we can get on to the difficulties in building SBCL for the Switch, let's first take a look at how SBCL is normally built on a PC.

Building SBCL

SBCL is primarily written in Lisp itself. There is a small C runtime as well, which is compiled with a normal C compiler, but before it can do that, there are some things it needs to know about the operating system environment it compiles for. The runtime also doesn't have a compiler of its own, so it can't compile any Lisp code. In order to get the whole process kicked off, SBCL requires another Lisp implementation to bootstrap with, ideally another version of itself.

The build then proceeds in roughly five phases:

  1. build-config
    This step just gathers whatever build configuration options you want for your target and spits them out into a readable format for the rest of the build process.

  2. make-host-1

    Now we build the cross-compiler with the host Lisp compiler, and at the same time emit C header files describing Lisp object layouts in memory as C structs for the next step.

  3. make-target-1

    Next we run the target C compiler to create the C runtime. As mentioned, this uses a standard C compiler, which can itself be a cross-compiler. The C runtime includes the garbage collector and other glue to the operating system environment. This step also produces some constants the target Lisp compiler and runtime needs to know about by using the C compiler to read out relevant operating system headers.

  4. make-host-2

    With the target runtime built, we build the target Lisp system (compiler and the standard library) using the Lisp cross-compiler built by the Lisp host compiler in make-host-1. This step produces a "cold core" that the runtime can jump into, and can be done purely on the host machine. This cold core is not complete, and needs to be executed on the target machine with the target runtime to finish bootstrapping, notably to initialize the object system, which requires runtime compilation. This is done in

  5. make-target-2

    The cold core produced in the last step is loaded into the target runtime, and finishes the bootstrapping procedure to compile and load the rest of the Lisp system. After the Lisp system is loaded into memory, the memory is dumped out into a "warm core", which can be loaded back into memory in a new process with the target runtime. From this point on, you can load new code and dump new images at will.

Notable here is the need to run Lisp code on the target machine itself. We can't cross-compile "purely" on the host, not least because user Lisp code cannot be compiled without also being run (unlike batch-compiled C code), and when it is run it assumes that it is in the target environment. So we really don't have much of a choice in the matter.

In order to deploy an application, we proceed similar to make-target-2: We compile in Lisp code incrementally and then when we have everything we need we dump out a core with the runtime attached to it. This results in a single binary with a data blob attached.

When the SBCL runtime starts up it looks for a core blob, maps it into memory, marks pages with code in them as executable, and then jumps to the entry function the user designated. This all is a problem for the Switch.

Building for the Switch

The Switch is not a PC environment. It doesn't have a shell, command line, or compiler suite on it to run the build as we usually do. Worse still, its operating system does not allow you to create executable pages, so even if we could run the compilation steps on there we couldn't incrementally compile anything on it like we usually do for Lisp code.

But all is not lost. Most of the code is not platform dependent and can simply be compiled for ARM64 as usual. All we need to do is make sure that anything that touches the surrounding environment in some way knows that we're actually trying to compile for the Switch, then we can use another ARM64 environment like Linux to create our implementation.

With that in mind, here's what our steps look like:

  1. build-config
    We run this on some host system, using a special flag to indicate that we're building for the Switch. We also enable the fasteval contrib. We need fasteval to step in for any place where we would usually invoke the compiler at runtime, since we absolutely cannot do that on the Switch.

  2. make-host-1

    This step doesn't change. We just get different headers that prep for the Switch platform.

  3. make-target-1

    Now we use the C compiler the Nintendo SDK provides for us, which can cross-compile for the Switch. Unfortunately the OS is not POSIX compliant, so we had to create a custom runtime target in SBCL that stubs out and papers over the operating system environment differences that we care about, like dynamic linking, mapping pages, and so on.
    Here is where things get a bit weird. We are now moving on to compiling Lisp code, and we want to do so on a Linux host system. So we have to...

  4. build-config (2)

    We now create a normal ARM64 Linux system with the same feature set as for the Switch. This involves the usual steps as before, though with a special flag to inform some parts of the Lisp process that we're going to ultimately target the Switch.

  5. make-host-1 (2)

  6. make-target-1 (2)

  7. make-host-2

  8. make-target-2

    With all of this done we now have a slightly special SBCL build for Linux ARM64. We can now move on to compiling user code.

  9. For user code we now perform some tricks to make it think it's running on the Switch, rather than on Linux. In particular we modify *features* to include :nx (the Switch code name) and not :linux, :unix, or :posix. Once that is set up and ASDF has been neutered, we can compile our program (like Trial) "as usual" and at the end dump out a new core.

We've solved the problem of actually compiling the code, but we still need to figure out how to get the code started on the Switch, since it does not allow us to do the usual core-mapping strategy. As such, attaching the new core to the runtime we made for the Switch won't work.

To make this work, we make use of two relatively unknown features of SBCL: immobile-code, and elfination. Usually when SBCL compiles code at runtime, it sticks it into a page somewhere, and marks that page executable. The code itself however could become unneeded at some point, at which point we'd like to garbage collect it. We can then reclaim the space it took up, and to do so compact the rest of the code around it. The immobile-code feature allows SBCL to take up a different strategy, where code is put into special reserved code pages and remains there. This means it can't be garbage collected, but it instead can take advantage of more traditional operating system support. Typically executables have pre-marked sections that the operating system knows to contain code, so it can take care of the mapping when the program is started, rather than the program doing it on its own like SBCL usually does.

OK, so we can generate code and prevent it from being moved. But we still have a core at the end of our build that we now need to transform into the separate code and data sections needed for a typical executable. This is done with the elfination step.

The elfinator looks at a core and performs assembly rewriting to make the code position-independent (a requirement for Address Space Layout Randomisation), and then tears it out into two separate files, a pure code assembly file, and a pure data payload file.

We can now take those two files and link them together with the runtime that the C compiler produced and get a completed SBCL that runs like any other executable would. So here's the last steps of the build process:

  1. Run the elfinator to generate the assembly files

  2. Link the final binary

  3. Run the Nintendo SDK's authoring tools to bundle metadata, shared libraries, assets, and the application binary into one final package

That's quite an involved build setup. Not to mention that we need at least an ARM64 Linux machine to run most of the build on, as well as either an AMD64 Windows machine (or an AMD64 Linux machine with Wine) to run the Nintendo SDK compiler and authoring tools.

I usually use an AMD64 Linux machine, so there's a total of three machines involved: The AMD64 "driver," the ARM64 build host, and a Windows VM to talk to the devkit with.

I wrote a special build system with all sorts of messed up caching and cross-machine synchronisation logic to automate all of this, which was quite a bit of work to get going, especially since the build should also be drivable from an MSYS2/Windows setup. Lots of fun with path mangling!

So now we have a full Lisp system, including user code, compiling for and being able to run on the Switch. Wow! I've skipped over a lot of the nitty-gritty of getting the build properly aware of which target it's building for, making the elfinator and immobile-code work on ARM64, and porting all of the support libraries like pathname-utils, libmixed, cl-gamepad, etc. Again, most of the details we can't openly talk about due to the NDA. However, we have upstreamed what work we could, and none of the Lisp libraries need a private fork.

It's worth noting though that elfination wasn't initially designed to produce position-independent executable Lisp code, which is usually full of absolute pointers. So we needed to do a lot of work in the SBCL compiler and runtime to support load-time relocation of absolute pointers and make sure code objects (which usually contain code constants) no longer have absolute pointers, as the GC can't modify executable sections. Not even the OS loader is allowed to modify executable sections to relocate absolute pointers. We did this by moving absolute pointers like code constants out of the text space into a read-writable space close enough that constant references in code could be rewritten to load from this r/w space instead, where the loader and the moving GC are free to fix up pointers.

Instead of interfacing directly with the Nintendo SDK, I've opted to create my own C libraries with a custom interface that the Lisp libraries talk to in order to access the operating system functionality they need. That way I can at least publish the Lisp bits openly, and only keep the small C library private. Anyway, now that we can run stuff, we're not done yet. Our system actually needs to keep running, too, and that brings us to

The Garbage Collector

Garbage collection is a huge topic in itself and there's a ton of different techniques to make it work efficiently. The standard GC for SBCL is called "gencgc", a Generational Garbage Collector. Generational meaning it keeps separate "generations" of objects and scans the generations in different frequencies, copying them over to another generation's location to compact the space. None of this is inherently an issue for the Switch, if it weren't for multithreading.

When multiple threads are involved, we can't just move objects around, as another thread could be accessing it at any time. The easiest way to resolve this conflict is to park all threads before engaging garbage collection. So the question becomes: when a thread wants to start garbage collection, how does it get the other threads to park?

On Unix systems a pretty handy trick is used: we can use the signalling mechanism to send a signal to the other threads, which then take that hint to park.

On the Switch we don't have any signal mechanism. In fact, we can't interrupt threads at all. So we instead need to somehow get each thread to figure out that it should park on its own. The typical strategy for this is called "safepoints".

Essentially we modify the compiler a little bit to inject some extra code that checks whether the thread should park or not. This strategy has some issues of its own.

The current safepoint system in SBCL was written for Windows, which similarly does not have inter-process signal handlers. However, unlike the Switch, it does still have signal handling for the current thread. So the current safepoint implementation was written with this strategy:

Each thread keeps a page around that a safepoint just writes a word to. When GC is engaged, those pages are marked as read-only, so that when the safepoint is hit and the other thread tries to write to the page, a segmentation fault is triggered and the thread can park. This is efficient, since we only need a single instruction to write into the page.

On the Switch we can't use this trick either, so we have to actually insert a more complex check, which can be tricky to get working as intended, as all parallel algorithms tend to be.
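To make the polling idea a bit more concrete, here is a minimal toy sketch in Python. It only illustrates the "check a flag and park" protocol with made-up names; SBCL's real safepoints are a couple of machine instructions the compiler injects into generated code, not a function call.

import threading
import time

# Toy illustration of cooperative safepoints (hypothetical names, not SBCL's
# actual runtime): worker threads poll a flag at their "safepoints" and park
# until the collector is done.
gc_requested = threading.Event()        # the collector asks everyone to park
everyone_parked = threading.Barrier(3)  # two workers + the collector
resume = threading.Event()              # set once the collection has finished

def safepoint():
    """Park this thread if a collection has been requested."""
    if gc_requested.is_set():
        everyone_parked.wait()          # report "I'm at a safe point now"
        resume.wait()                   # sleep until the collector releases us

def worker():
    for _ in range(10):
        time.sleep(0.01)                # pretend to allocate and compute
        safepoint()                     # the check the compiler would inject

def collector():
    time.sleep(0.02)                    # let the workers get going first
    gc_requested.set()                  # request a stop-the-world collection
    everyone_parked.wait()              # wait until every worker has parked
    print("all threads parked; collecting...")
    gc_requested.clear()                # clear *before* releasing the workers
    resume.set()

threads = [threading.Thread(target=worker) for _ in range(2)]
threads.append(threading.Thread(target=collector))
for t in threads:
    t.start()
for t in threads:
    t.join()

The properties that matter are the same as in the real thing: the check on the fast path has to be cheap, and every thread has to be guaranteed to reach a safepoint eventually, or the collector waits forever.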

Since safepoints aren't needed on any platform other than Windows, the code also hasn't been tested anywhere else, so aside from adapting it to this new platform, it's also just unstable. It is apparently quite a big mess in the code base and would ideally be redone from scratch, but hopefully we don't have to go quite that far.

I'd also like to give special mention to the issue that CLOS presents. Usually SBCL defers compiling the "discriminating function" that is needed to dispatch to methods until the first call of the generic function. This is done because CLOS is highly dynamic and allows adding and removing methods at pretty much any time, so there's usually no good point at which the system knows the set of methods is complete. Of course, on the Switch we can't invoke the compiler, so we can't really do this. For now our strategy has been to rely on the fast evaluator: we stub out the compile function to create a lambda that executes the code via the evaluator instead. This has the advantage of working with any user code that relies on compile as well, though it is obviously much slower to execute than it would be if we could actually compile.
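As a rough analogy only (a Python sketch with hypothetical names, not SBCL's implementation), the pattern is: build the dispatcher lazily on the first call, and throw it away whenever the set of methods changes. On a platform without a compiler, the "build" step has to produce something interpreted rather than freshly compiled code.

class GenericFunction:
    def __init__(self):
        self.methods = {}           # argument type -> implementation
        self._dispatcher = None     # built lazily, on the first call

    def add_method(self, typ, fn):
        self.methods[typ] = fn
        self._dispatcher = None     # adding a method invalidates the dispatcher

    def _build_dispatcher(self):
        # Stand-in for "compiling the discriminating function"; a real system
        # could generate specialised code here, the fallback is a slower lookup.
        table = dict(self.methods)
        def dispatch(arg, *rest):
            return table[type(arg)](arg, *rest)
        return dispatch

    def __call__(self, *args):
        if self._dispatcher is None:
            self._dispatcher = self._build_dispatcher()
        return self._dispatcher(*args)

area = GenericFunction()
area.add_method(int, lambda r: 3.14159 * r * r)     # circles, by radius
area.add_method(tuple, lambda wh: wh[0] * wh[1])    # rectangles, by (w, h)
print(area(2))          # the dispatcher is built here, on the first call
print(area((3, 4)))     # later calls reuse it until a method is added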

This neatly brings us to

Future Work

The fasteval trick is mostly a fallback. Ideally I'd like to explore options to freeze as much of CLOS in place as possible right before the final image is dumped and compile as much as possible ahead of time. I'd also like to take a closer look at the block compilation mode that Charles restored some years back.

It's very possible that the Switch's underpowered processor will also force us to implement further optimisations, especially on the side of my engine and the code in Kandria itself. Up until now I've been able to get away with comparatively little optimisation, since even computers of ten years ago are more than fast enough to run what I need for the game. However, I'm not so sure that the Switch could match up to that even if it didn't also introduce additional constraints on performance with its lack of operating system support.

First, though, we need to get the garbage collector running fully. It runs enough to boot up and get into Trial's main loop, but as soon as it hits multi-generation compaction, it falls flat on its face.

Next we need to get callbacks from C working again. Apparently this is a part of the SBCL codebase that can only be described as "a mess", involving lots of hand-rolled assembly routines, which probably need some adjustments to work correctly with immobile-code and elfination. Fortunately callbacks are relatively rare; Trial only needs them for sound playback via libmixed.

There have also been some other issues that we've kept in the back of our heads but that don't require our immediate attention, as well as some extra portability features I know I'll have to work on in Trial before its selftest suite fully passes on the Switch.

Conclusion

I'll be sure to add an addendum here should the state of the port significantly change in the future. Some people have also asked me if the work could be made public, or if I'd be willing to share it.

The answer to that is that while I would desperately like to share it all publicly, the NDA prevents us from doing so. We still upstream and publicise whatever we can, but some bits that tie directly into the Nintendo SDK cannot be shared with anyone that hasn't also signed the NDA. So, in the very remote possibility that someone other than me is crazy enough to want to publish a Common Lisp game on the Nintendo Switch, they can reach out to me and I'll happily give them access to our porting work once the NDA has been signed.

Naturally, I'll also keep people updated more closely on what's going on in the monthly updates for Patrons. With that all said, I once again plead with you to consider supporting me on Patreon, GitHub, or Ko-Fi. All the income from these will, for the foreseeable future, be going towards funding the SBCL port to the Switch as well as the current game project.

Thank you as always for reading, and I hope to share more exciting news with you soon!

13 Sep 2024 9:03am GMT

feedDjango community aggregator: Community blog posts

Cloud Migration Beginning - Building SaaS #202

In this episode, we started down the path of migrating School Desk off of Heroku and onto Digital Ocean. Most of the effort was on tool changes and beginning to make a Dockerfile for deploying the app to the new setup.

13 Sep 2024 5:00am GMT

11 Sep 2024

feedPlanet Twisted

Glyph Lefkowitz: Python macOS Framework Builds

When you build Python, you can pass various options to ./configure that change aspects of how it is built. There is documentation for all of these options, and they are things like --prefix to tell the build where to install itself, --without-pymalloc if you have some esoteric need for everything to go through a custom memory allocator, or --with-pydebug.

One of these options only matters on macOS, and its effects are generally poorly understood. The official documentation just says "Create a Python.framework rather than a traditional Unix install." But… do you need a Python.framework? If you're used to running Python on Linux, then a "traditional Unix install" might sound pretty good; more consistent with what you are used to.

If you use a non-Framework build, most stuff seems to work, so why should anyone care? I have mentioned it as a detail in my previous post about Python on macOS, but even I didn't really explain why you'd want it, just that it was generally desirable.

The traditional answer to this question is that you need a Framework build "if you want to use a GUI", but this is demonstrably not true. At first it might not seem so, since the go-to Python GUI test is "run IDLE"; many non-Framework builds also omit Tkinter because they don't ship a Tk dependency, so IDLE won't start. But other GUI libraries work fine. For example, uv tool install runsnakerun / runsnake will happily pop open a GUI window, Framework build or not. So it bears some explaining.

Wait, what is a "Framework" anyway?

Let's back up and review an important detail of the mac platform.

On macOS, GUI applications are not just an executable file; they are organized into a bundle, which is a directory with a particular layout that includes metadata and launches an executable. A thing that, on Linux, might live in a combination of /bin/foo for its executable and /share/foo/ for its associated data files is instead, on macOS, bundled together into Foo.app, and those components live in specified locations within that directory.

A framework is also a bundle, but one that contains a library. Since they are directories, Applications can contain their own Frameworks and Frameworks can contain helper Applications. If /Applications is roughly equivalent to the Unix /bin, then /Library/Frameworks is roughly equivalent to the Unix /lib.

App bundles are contained in a directory with a .app suffix, and frameworks are a directory with a .framework suffix.

So what do you need a Framework for in Python?

The truth about Framework builds is that there is not really one specific thing that you can point to that works or doesn't work, where you "need" or "don't need" a Framework build. I was not able to quickly construct an example that trivially fails in a non-framework context for this post, but I didn't try that many different things, and there are a lot of different things that might fail.

The biggest issue is not actually the Python.framework itself. The metadata on the framework is not used for much outside of a build or linker context. However, Python's Framework builds also ship with a stub application bundle, which places your Python process into a normal application(-ish) execution context all the time, allowing various platform APIs like [NSBundle mainBundle] to behave in the normal, predictable ways that the numerous frameworks included on Apple platforms expect.

Various Apple platform features might want to ask a process questions like "what is your unique bundle identifier?" or "what entitlements are you authorized to access?", and even beginning to answer those questions requires information stored in the application's bundle.

Python does not ship with a wrapper around the core macOS "cocoa" API itself, but we can use pyobjc to interrogate this. After installing pyobjc-framework-cocoa, I can do this:

>>> import AppKit
>>> AppKit.NSBundle.mainBundle()

On a non-Framework build, it might look like this:

NSBundle </Users/glyph/example/.venv/bin> (loaded)

But on a Framework build (even in a venv in a similar location), it might look like this:

NSBundle </Library/Frameworks/Python.framework/Versions/3.12/Resources/Python.app> (loaded)

This is why, at various points in the past, GUI access required a framework build, since connections to the window server would just be rejected for Unix-style executables. But that was an annoying restriction, so it was removed at some point, or at least, the behavior was changed. As far as I can tell, this change was not documented. But other things like user notifications or geolocation might need to identify an application for preferences or permissions purposes, respectively. Even something as basic as "what is your app icon" for what to show in alert dialogs is information contained in the bundle. So if you use a library that wants to make use of any of these features, it might work, or it might behave oddly, or it might silently fail in an undocumented way.
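To poke at the same difference from another angle, you can ask the main bundle for its identifier (same pyobjc-framework-cocoa setup as above; the exact values below are my assumption and will vary with how your Python was built):

>>> import AppKit
>>> AppKit.NSBundle.mainBundle().bundleIdentifier()

On a Framework build I would expect something like 'org.python.python', read from the stub app bundle's Info.plist; on a non-Framework build there is no Info.plist behind the "bundle", so I would expect None back, and any platform feature keyed on the bundle identifier has nothing to go on.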

This might seem like undocumented, unnecessary cruft, but it is that way because it's just basic stuff the platform expects to be there for a lot of different features of the platform.

/etc/ builds

Still, this might seem like a strangely vague description of this feature, so it might be helpful to examine it through a metaphor to something you are more familiar with. If you're familiar with more Unix-style application development, consider a junior developer - let's call him Jim - asking you whether he should use an "/etc build" or not as a basis for his Docker containers.

What is an "/etc build"? Well, base images like ubuntu come with a bunch of files in /etc, and Jim just doesn't see the point of any of them, so he likes to delete everything in /etc just to make things simpler. It seems to work so far. More experienced Unix engineers that he has asked react negatively and make a face when he tells them this, and seem to think that things will break. But their app seems to work fine, and none of these engineers can demonstrate some simple function breaking, so what's the problem?

Off the top of your head, can you list all the features that all the files that /etc is needed for? Why not? Jim thinks it's weird that all this stuff is undocumented, and it must just be unnecessary cruft.

If Jim were to come back to you later with a problem like "it seems like hostname resolution doesn't work sometimes" or "ls says all my files are owned by 1001 rather than the user name I specified in my Dockerfile" you'd probably say "please, put /etc back, I don't know exactly what file you need but lots of things just expect it to be there".

This is what a framework vs. a non-Framework build is like. A Framework build just includes all the pieces of the build that the macOS platform expects to be there. What pieces do what features need? It depends. It changes over time. And the stub that Python's Framework builds include may not be sufficient for some more esoteric stuff anyway. For example, if you want to use a feature that needs a bundle that has been signed with custom entitlements to access something specific, like the virtualization API, you might need to build your own app bundle. To extend our analogy with Jim, the fact that /etc exists and has the default files in it won't always be sufficient; sometimes you have to add more files to /etc, with quite specific contents, for some features to work properly. But "don't get rid of /etc (or your application bundle)" is pretty good advice.

Do you ever want a non-Framework build?

macOS does have a Unix subsystem, and many Unix-y things work, for Unix-y tasks. If you are developing a web application that mostly runs on Linux anyway and never care about using any features that touch the macOS-specific parts of your mac, then you probably don't have to care all that much about Framework builds. You're not going to be surprised one day by non-framework builds suddenly being unable to use some basic Unix facility like sockets or files. As long as you are aware of these limitations, it's fine to install non-Framework builds. I have a dozen or so Pythons on my computer at any given time, and many of them are not Framework builds.
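If you're ever unsure which kind a given interpreter is, one quick check is the PYTHONFRAMEWORK configure variable. A minimal sketch, assuming the interpreter exposes its configure-time variables through sysconfig the way CPython does:

import sysconfig

# PYTHONFRAMEWORK is set by CPython's macOS configure step; it is a non-empty
# string (normally "Python") for Framework builds and empty otherwise.
if sysconfig.get_config_var("PYTHONFRAMEWORK"):
    print("Framework build")
else:
    print("plain Unix-style build")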

Framework builds do have some small drawbacks. They tend to be larger, they can be a bit more annoying to relocate, and they typically want to live in a location like /Library or ~/Library. You can move Python.framework into an application bundle according to certain rules, as any bundling tool for macOS will have to do, but it might not work in random filesystem locations. This may make managing a really large number of Python versions more annoying.

Most of all, the main reason to use a non-Framework build is if you are building a tool that manages a fleet of Python installations to perform some automation that needs to know about Python installs, and you want to write one simple tool that does stuff on Linux and on macOS. If you know you don't need any platform-specific features, don't want to spend the (not insignificant!) effort to cover those edge cases, and you get a lot of value from that level of consistency (for example, a teaching environment or interdisciplinary development team with a lot of platform diversity) then a non-framework build might be a better option.

Why do I care?

Personally, I think it's important for Framework builds to be the default for most users, because I think that as much stuff should work out of the box as possible. Any user who sees a neat library that lets them get control of some chunk of data stored on their mac - map data, health data, game center high scores, whatever it is - should be empowered to call into those APIs and deal with that data for themselves.

Apple already makes it hard enough with their thicket of code-signing and notarization requirements for distributing software, aggressive privacy restrictions which prevent API access to some of this data in the first place, all these weird Unix-but-not-Unix filesystem layout idioms, sandboxing that restricts access to various features, and the use of esoteric abstractions like mach ports for communications behind the scenes. We don't need to make it even harder by making the way that you install your Python be a surprise gotcha variable that determines whether or not you can use an API like "show me a user notification when my data analysis is done" or "don't do a power-hungry data analysis when I'm on battery power", especially if it kinda-sorta works most of the time, but only fails on certain patch-releases of certain versions of the operating system, because an implementation detail of a proprietary framework changed in the meanwhile to require an application bundle where it didn't before, or vice versa.

More generally, I think that we should care about empowering users with local computation and platform access on all platforms, Linux and Windows included. This just happens to be one particular quirk of how native platform integration works on macOS specifically.


Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. For this one, thanks especially to long-time patron Hynek who requested it specifically. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like "how can we set up our Mac developers' laptops with Python".

11 Sep 2024 7:43pm GMT

05 Sep 2024

feedPlanet Lisp

Scott L. Burson: Equality and Comparison in FSet

This post is somewhat prompted by a recent blog post by vindarel, about Common Lisp's various built-in equality predicates. It is also related to Marco Antoniotti's CDR 8, Generic Equality and Comparison for Common Lisp, implemented by Charles Zhang; Alex Gutev's GENERIC-CL; and Henry Baker's well-known 1992 paper on equality.

Let me start by summarizing those designs. CDR 8 proposes a generic equality function equals, and a comparison function compare. These are both CLOS generic functions intended to be user-extended, though they also have some predefined methods. equals has several keyword parameters controlling its exact behavior. One of these is case-sensitive, which controls string comparison. Another is recursive, which controls its behavior on conses; if recursive is false (the default), conses are compared by eq, but if it's true, a tree comparison is done. compare is specified to return one of the symbols <, >, =, or /= to indicate the relative order of its arguments; it also has keyword parameters such as case-sensitive and recursive.

GENERIC-CL replaces many CL operations with CLOS generic functions, and also adds new ones. It touches many parts of the language other than equality and comparison, but I'll leave those aside for now. It has two generic equality functions: equalp, which, notwithstanding the name, is case-sensitive for characters and strings, and likep, which is case-insensitive. It also has comparison predicates lessp etc., along with a compare function (implemented using lessp) that can return :less, :equal, or :greater.

Henry's paper makes some interesting arguments about how a Common Lisp equality predicate should behave; he makes these concrete by defining a novel predicate egal. His most salient point, for my purposes, is that mutable objects, including vectors and conses, should always be compared with eq. I will argue below that FSet adheres to the spirit of this desideratum even though not to its letter.

FSet advertises itself as a "set-theoretic" collections library, and as such, requires a well-defined notion of equality. Also, since it is implemented using balanced binary trees, it requires an ordering function. FSet defines a generic function compare that serves both purposes: it returns one of the symbols :less, :greater, :equal, or :unequal to indicate the relative order of its arguments.

FSet's equality predicate is equal?, which simply calls compare and checks that the result is :equal. Thus, the only step required to add a user-defined type to the FSet universe is to define a compare method for it. FSet provides a few utilities to help with this, which I'll go into below.

There are several cases in which compare returns :unequal to indicate unequal-but-incomparable arguments.

If compare's default method is called with objects of different classes, it returns a result based solely on the classes; the contents of the objects are not examined. Again, it is part of FSet's design philosophy to give you as much freedom as reasonably possible; this includes allowing you to have sets containing more than one kind of object.

(In general, FSet's built-in ordering has been chosen for performance, not for its likely usefulness to clients. For example, compare on two strings of different lengths orders the shorter one first, ignoring the contents, because this requires only an O(1) operation.)

Comparison with equal

FSet's equal? on built-in CL types behaves almost identically to CL's equal, with the one difference that on vectors (other than bit-vectors), equal just calls eq, but equal? compares the contents. (I just noticed that this is not true for multidimensional arrays, and have filed an FSet bug.) (On bit-vectors, they both compare the contents.)

Comparison with CDR 8

There are noticeable similarities between FSet and the CDR 8 proposal; the latter not only includes a comparison function, but even provides for it to return /=, corresponding to FSet's :unequal, to indicate unequal but incomparable arguments. But the idea that the behavior of equality and comparison could be modified via keyword parameters does not seem appropriate for FSet. I think it would make FSet quite a bit harder to use, for little gain. For example, FSet comparison on lists walks the lists, but CDR 8, by default, just calls eq on their heads; users would have to remember to pass :recursive t to get the behavior they probably expect. FSet collections would have to remember which options they were created with, and if you tried, say, to take the union of two sets which used different options, you'd get an error.

Years of programming experience - not only with FSet but also with Refine, the little-known proprietary language that inspired FSet - have left me with the clear impression that having a single global equality predicate is a great simplification and very rarely limiting, provided it was defined properly to begin with.

I also note that FSet has more predefined methods for its comparison function (and therefore for its equality predicate) than are proposed in CDR 8. In particular, CDR 8's default compare methods return /= in more cases (e.g. distinct symbols), which is not terribly useful, in my view; FSet tries to minimize its use of :unequal because its data structure code, in that case, has to fall back to using alists, which have much poorer time complexity than binary trees. (OTOH, Marco seems to have overlooked the other cases listed above that arguably should be treated as unequal but incomparable.)

Comparison with GENERIC-CL

Again, there are noticeable similarities between FSet's and GENERIC-CL's equality predicates and comparison functions. GENERIC-CL does have two different equality predicates, equalp and likep, but these have no parameters other than the objects to be compared; it does not follow the CDR 8 suggestion of specifying keyword parameters that alter their behavior. Its equalp is very similar to FSet's equal?, but not quite identical; one difference is that it returns true when called on the integer 1 and the float 1.0, where both fset:equal? and cl:equal return false.

That normally-minor discrepancy is related to a larger deficiency: GENERIC-CL's comparison operator has no defined return value corresponding to :unequal, to indicate unequal-but-incomparable arguments. That is, FSet and CDR 8 both recognize that comparison can't implement a total ordering over all possible pairs of objects, but GENERIC-CL overlooks this point.

There are other overlaps between FSet and GENERIC-CL, but I'll save an analysis of those for another time.

Comparison with EGAL

Henry is proposing an extension to Common Lisp, not an operator that can be written in portable CL. This shows up in two ways: first, some of his sample code implementing egal requires access to implementation internals; second, he proposes a distinction between mutable and immutable vectors and strings that does not exist in CL. The text also suggests adding an immutable cons type to CL, though the sample code doesn't mention this case.

I agree with Henry in principle: a mutable cons (or string, or vector) is a very different beast from an immutable one; as he puts it, "eq is correct for mutable cons cells and equal is correct for immutable cons cells". CL would have been a better language, in principle, had conses been immutable, and immutable strings and vectors been available (if perhaps not the default). But here I must invoke one of my favorite quips: "The difference between theory and practice is never great in theory, but in practice it can be very great indeed." The key design goal of CL, to unify the Lisp community by providing a language into which existing programs in various Lisp dialects could easily be ported, demanded that conses remain mutable by default. Adding immutable versions of these types was not, to my knowledge, a priority.

And as Henry himself points out, in the overwhelmingly most common usage pattern for these types, they are treated as immutable once fully constructed. For example, a common idiom is for a function to build a list in reverse order, then pass it through nreverse before returning it; at that point, it is fully constructed, and it won't be modified thereafter. Obviously, this is a generalization over real-world Lisp programs and won't always be true, but since Lisp encourages sharing of structure, I think Lisp programmers learn pretty early that they have to be very careful when mutating a list or string or vector that they can't easily prove they're holding the only pointer to (normally by virtue of having just created it). Given that this is pretty close to being true in practice, and that comparing these aggregates by their contents is usually what people want to do when they use them as members of collections, it would seem odd for FSet to distinguish them by identity.

Also, there's the simple fact that for these built-in types, CL provides no portable way to order or hash them by identity. Such functionality must exist internally for use by eq and eql hash tables, but the language does not expose any portable interface to it.

So in this case, both programming convenience and the hard constraints of implementability force a choice that is contrary to theoretical purity: FSet must compare these types by their contents. The catch, of course, is that one must be careful, once having used a list or string or vector as an element of an FSet collection, never to modify it, lest one break the collection's ordering invariant. But in practice, this rule doesn't seem at all onerous: if you found the object in the heap somewhere - as opposed to having just created it - don't mutate it.

When it comes to user-defined types, however, the situation is quite different. It is easy for the programmer, defining a class intended for mutation, to arrange for FSet to distinguish objects of the class by their identity rather than their contents. The recommended way to do this is to include a serial-number slot that is initialized, at object-creation time, to the next value from an integer sequence; then write a compare method that uses this slot. (I'll show some examples shortly.)

So if the design of your program involves some pieces of mutable state that are placed in collections, my strong recommendation is that such state should never be implemented as a bare list or string or vector, but should always be wrapped in an instance of a user-defined class. I believe this to be a good design principle in general, even when FSet is not involved, but it becomes imperative for programs using FSet.

Adding Support for User-Defined Classes

When adding FSet support for a user-defined class, the first question is whether instances of the class represent mutable objects or mathematical values. If it's a mathematical value, it should be treated as immutable once constructed. (Alas, CL provides no way to enforce immutability.) In that case, it should be compared by content. FSet provides a convenient macro compare-slots for this purpose. Here's an example:

(defstruct frob
  position
  color)

(defmethod compare ((a frob) (b frob))
  (compare-slots a b #'frob-position #'frob-color))

This specifies that frobs shall be ordered first by position, then by color. compare-slots handles the details for you, including the complications that arise if one of the slot value comparisons returns :unequal.

For standard classes, best performance is obtained by supplying slot names as quoted symbols rather than function-quoted accessor names:

(defclass directed-graph ()
  ((nodes :initarg :nodes :accessor digraph-nodes)
   (edges :initarg :edges :accessor digraph-edges)))

(defmethod compare ((a directed-graph) (b directed-graph))
  (compare-slots a b 'nodes 'edges))

I am not sure whether to recommend the use of slot names for structure classes; the answer may depend on the implementation. At least on SBCL, you're probably better off using accessor functions for structs.

(Actually, the functions supplied don't have to be accessors; you could compare by some computed value instead, if you wanted. I haven't seen a use for this possibility in practice, though.)

Structure classes implementing mutable objects should do something like this:

(defvar *next-widget-serial-number* 0)

(defstruct widget
  (serial-number (incf *next-widget-serial-number*))
  ...)

(defmethod compare ((a widget) (b widget))
  (compare-slots a b #'widget-serial-number))

For standard classes implementing mutable objects, FSet provides an especially convenient solution: just include identity-ordering-mixin as a superclass:

(defclass thingy (identity-ordering-mixin ...) ...)

That's it!

More on FSet's Single Global Ordering

I sometimes get pushback, albeit mostly from people who haven't actually used FSet, about my design decision to have a single global ordering implemented by compare, rather than allowing collections to accept an ordering function when they are created. Let me defend this decision a little bit.

Because the ordering is extensible by defining new methods on compare, a programmer can always force a non-default ordering by defining a wrapper type. For example, if you have a map whose keys are strings and which you want to be maintained in lexicographic order, you can easily write a structure class to wrap the strings, and give that class a compare method that forces the strings to be compared lexicographically. (FSet even helps you out by providing a generic function compare-lexicographically, which you can just call.)

That said, I believe the need to write wrapper classes arises very rarely. It's needed only when there is a reason that a set or map needs to be continually maintained in the non-default order. If the non-default ordering is needed only occasionally - say, when the collection is being printed - it's usually easier to convert it to a list at that point (see FSet's generic function convert, about which I should write another blog post) and then just call sort or stable-sort on it.

And there is a wonderful simplicity to having the ordering be global. Ease of use is a very important design goal for FSet; collection-specific orderings would give the user another wrinkle to think about. I just don't see that the benefits, which seem to me very small, would outweigh the cost in cognitive load.

Perhaps the best way to put it is that FSet is primarily intended for application programming, not systems programming. The distinction is fuzzy, but broadly, if programmer productivity is more important to you than squeezing out the last few percent of performance, you're doing application programming, not systems programming. This is not necessarily a distinction about the kind of program being written - there certainly are applications that have performance-sensitive parts - but rather, about the amount of knowledge, experience, and mental effort required to write it. FSet is designed for general productivity, not necessarily for someone who needs maximal control to achieve maximal performance.

05 Sep 2024 6:58am GMT

04 Sep 2024

feedPlanet Twisted

Hynek Schlawack: Production-ready Python Docker Containers with uv

Starting with 0.3.0, Astral's uv brought many great features, including support for cross-platform lock files uv.lock. Together with subsequent fixes, it has become Python's finest workflow tool for my (non-scientific) use cases. Here's how I build production-ready containers, as fast as possible.

04 Sep 2024 10:00am GMT

03 Sep 2024

feedPlanet Twisted

Hynek Schlawack: How to Ditch Codecov for Python Projects

Codecov's unreliability breaking CI on my open source projects has been a constant source of frustration for me for years. I have found a way to enforce coverage over a whole GitHub Actions build matrix that doesn't rely on third-party services.

03 Sep 2024 12:00am GMT

30 Aug 2024

feedKernel Planet

Dave Airlie (blogspot): On Rust, Linux, developers, maintainers

There's been a couple of mentions of Rust4Linux in the past week or two, one from Linus on the speed of engagement and one about Wedson departing the project due to non-technical concerns. This got me thinking about project phases and developer types.

Archetypes:

I will regret making an analogy in an area I have no experience in, but let's give it a go with road building.
Let's sort developers into 3 rough categories. Let's preface by saying not all developers fit in a single category throughout their careers, and some developers can do different roles on different projects, or on the same project simultaneously.

1. Wayfinders/Mapmakers

I want to go build a hotel somewhere but there exists no map or path. I need to travel through a bunch of mountains, valleys, rivers, weather, animals, friendly humans, antagonistic humans and some unknowns. I don't care deeply about them, I want to make a path to where I want to go. I hit a roadblock, I don't focus on it, I get around it by any means necessary and move onto the next one. I document the route by leaving maps, signs. I build a hotel at the end.

2. Road builders

I see the hotel and path someone has marked out. I foresee that larger volumes will want to traverse this path and build more hotels. The roadblocks the initial finder worked around, I have to engage with. I engage with each roadblock differently. I build a bridge, dig a tunnel, blow up some stuff, work with/against humans, whatever is necessary to get a road built to the place the wayfinder built the hotel. I work on each roadblock until I can open the road to traffic. I can open it in stages, but it needs a completed road.

3. Road maintainers

I've got a road, I may have built the road initially. I may no longer build new roads. I've no real interest in hotels. I deal with intersections with other roads controlled by other people, I interact with builders who want to add new intersections for new roads, and remove old intersections for old roads. I fill in the holes, improve safety standards, handle the odd wayfinder wandering across my 8 lanes.

Interactions:

The interaction between wayfinders and maintainers is the most difficult one. Wayfinders like to move freely and quickly; maintainers have other priorities that slow them down. I believe there need to be road builders engaged between the wayfinders and maintainers.

Road builders have to be willing to expend the extra time to resolving roadblocks in the best way possible for all parties. The time it takes to resolve a single roadblock may be greater than the time expended on the whole wayfinding expedition, and this frustrates wayfinders. The builder has to understand what the maintainers concerns are and where they come from, and why the wayfinder made certain decisions. They work via education and trust building to get them aligned to move past the block. They then move down the road and repeat this process until the road is open. How this is done might change depending on the type of maintainers.

Maintainer types:

Maintainers can fall into a few different groups on a per-new-road basis, and how road builders deal with existing road maintainers depends on where they are for this particular intersection:

1. Positive and engaged

Aligned with the goal of the road, want to help out, design intersections, help build more roads and more intersections. Will often have helped wayfinders out.

2. Positive with real concerns

Agrees with the road's direction, might not like some of the intersections, willing to be educated and give feedback on newer intersection designs. Moves to group 1 or trusts that others are willing to maintain intersections on their road.

3. Negative with real concerns

Don't agree fully with road's direction or choice of building material. Might have some resistance to changing intersections, but may believe in a bigger picture so won't actively block. Hopefully can move to 1 or 2 with education and trust building.

4. Negative and unwilling

Don't agree with the goal, don't want the intersection built, won't trust anyone else to care about their road enough. Education and trust building is a lot more work here, and often it's best to leave these intersections until later, where they may be swayed by other maintainers having built their intersections. It might be possible to build a reduced intersection, but if they are a major enough roadblock in a very busy road, then a higher authority might need to be brought in.

5. Don't care/Disengaged

Doesn't care where your road goes and won't talk about intersections. This category often just need to be told that someone else will care about it and they will step out of the way. If they are active blocks or refuse interaction then again a higher authority needs to be brought in.

Where are we now?

I think the r4l project has had a lot of excellent wayfinding done, has a lot of wayfinding in progress, and probably has a bunch of future wayfinding to do. There are some nice hotels built. However, now we need to build the roads to them so others can build hotels.
To the higher authority, the road building process can look slow. They may expect cars to be driving on the road already, and they see roadblocks from a different perspective. A roadblock might look smaller to them, but have a lot of fine details, or a large roadblock might be worked through quickly once it's engaged with.
For the wayfinders the process of interacting with maintainers is frustrating and slow, and they don't enjoy it as much as wayfinding, and because they still only care about the hotel at the end, when a maintainer gets into the details of their particular intersection they don't want to do anything but go stay in their hotel.
The road will get built, it will get traffic on it. There will be tunnels where we should have intersections, there will be bridges that need to be built from both sides, but I do think it will get built.

I think my request from this is that contributors should try and identify the archetype they currently resonate with and find the next group over to interact with.

For wayfinders, it's fine to just keep wayfinding, just don't be surprised when the road building takes longer, or the road that gets built isn't what you envisaged.

For road builders, just keep building, find new techniques for bridging gaps and blowing stuff up when appropriate. Figure out when to use higher authorities. Take the high road, and focus on the big picture.

For maintainers, try and keep up with modern road building, don't say 20 year old roads are the pinnacle of innovation. Be willing to install the rumble strips, widen the lanes, add crash guardrails, and truck safety offramps. Understand that wayfinders show you opportunities for longer term success and that road builders are going to keep building the road, and the result is better if you engage positively with them.

30 Aug 2024 1:52am GMT

23 Aug 2024

feedKernel Planet

Linux Plumbers Conference: Welcome to the Android Micro-conference!

Every year the Android Micro-conference brings the upstream Linux community and the Android systems developers together at the Linux Plumbers Conference. They discuss how they can effectively engage the existing issues and collaborate on upcoming changes to the Android platform and their upstream dependencies.

This year Android MC is scheduled to start at 10am on Friday, 20th Sep at Hall L1 (Austria Center). Attending Android MC gives you a chance to contribute to the broader discussion on the Android platform ecosystem and Linux kernel development. You can share your own experiences, offer feedback, and help shape the future direction of these technologies.

Discussion topics for this year include:

Android MC will be followed by an Android BoF session, which will be an audience-directed discussion. It can be a follow-up of the discussions from any of the Android MC topics or a free-form discussion on Android-related topics.

23 Aug 2024 5:34pm GMT

07 Feb 2024

feedPlanet Plone - Where Developers And Integrators Write

PloneExpanse: Image scales wrongly regenerating

I had a problem with my Frankenstein stack of Plone 4 with various bits (core libraries) upgraded on it. Here's a description of my bug: when I uploaded an image and tried to use it in a Volto block that referenced its image scale download URL (such as @@images/<random-uuid4>.jpg), the image URL didn't work; it resulted in a 404 error. When I reindexed the image in the catalog, it worked. Now, the funky part is that I could reproduce the problem not only on my "doomed" Plone 4 stack, but also in the modern Plone 6 stack that we use for our main customer.

07 Feb 2024 6:06am GMT

29 Jan 2024

feedPlanet Plone - Where Developers And Integrators Write

PloneExpanse: Cleanup zc async

For my own reference, as I had to do a cleanup of zc.async tasks. The interface was too slow.

# bin/zeo_client debug
>>> queue = app._p_jar.root()['zc.async']['']
>>> from zc.async.queue import Queue
>>> Queue.__init__(queue)
>>> import transaction
>>> transaction.commit()
# for i in range(len(queue)):
#     queue.pull(0)

29 Jan 2024 1:11pm GMT

18 Oct 2023

feedPlanet Plone - Where Developers And Integrators Write

kitconcept GmbH: Plone Conference 2023 - Eibar

Group Photo Plone Conference 2023 in Eibar

The 2023 edition of the annual Plone conference happened from October 2nd to 8th in Eibar, Basque Country.

The kitconcept team was present with 8 developers. Our team members gave plenty of talks, keynotes and trainings.

kitconcept and friends having team dinner

Trainings

Two days filled with trainings, free for all conference attendees. This offer is unique in the Plone community, and kitconcept team members were the backbone of the trainings about how to use Plone 6: from deployment to development to deep insights into how Volto and Plone 6 work.

Alok Kumar, Jakob Kahl: Volto and React

Alok Kumar and Jakob Kahl did a two-day training to help developers get started with Volto and React:

https://2023.ploneconf.org/schedule/training/volto-and-react

Check out their trainings online if you want to catch up:

Érico Andrei : Installing Plone

Our colleague Érico Andrei gave a training about installing Plone on Day 2, the 3rd of October.

https://2023.ploneconf.org/schedule/training/installing-plone

Víctor Fernandez de Alba, Tiberiu Ichim: Effective Volto

Víctor Fernandez de Alba is kitconcept CTO and the Volto Release Manager. Tiberiu Ichim is a Volto team member and one of the key contributors to Volto. They gave key insights into how to work effectively with Volto. If you have experience with Volto and you want to become a real pro, this is the training you should go to.

https://2023.ploneconf.org/schedule/training/effective-volto

https://training.plone.org/effective-volto/index.html

Day One

On day one, kitconcept team members presented two talks, including the main keynote of the day.

Keynote State of Plone

Team members Érico Andrei, Víctor Fernández de Alba and Timo Stollenwerk together with Maurits van Rees of Zest Software and Eric Steele of Salesforce presented the very first Keynote of the Conference titled "State of Plone".

Breaking boundaries: Plone as headless CMS by Víctor Fernández de Alba

Our colleague Víctor Fernández de Alba gave a presentation about the challenges faced by the Plone content management system in keeping up with modern frontend developments and the growing popularity of headless CMSs.

Breaking boundaries: Plone as headless CMS

https://2023.ploneconf.org/schedule/breaking-boundaries-plone-as-headless-cms

Day Two

Day Two was an informative day, packed with interesting talks, panels and presentations.

Volto Team Meeting

Panel: The Future of Search in Plone, 2023 Edition

Timo Stollenwerk, Sally Kleinfeldt, Tiberiu Ichim, Eric Steele, Eric Bréhault, Rikupekka Oksanen, Érico Andrei and Guido Stevens hosted a very interesting panel about the future of search algorithms in Plone.

This panel provided a brief history and modern examples of Plone search, followed by a discussion of what improvements are needed - both from a marketing and technical perspective. This topic was first discussed at the 2011 conference and it was interesting to see how opinions had changed.

https://2023.ploneconf.org/schedule/the-future-of-search-in-plone-2023-edition

Alok Kumar : Is your Volto addon developer friendly ?

Meanwhile, kitconcept frontend developer Alok Kumar held a presentation about what makes a Volto add-on developer-friendly, and how we as developers can improve the way we develop add-ons for Volto.

https://2023.ploneconf.org/schedule/is-your-volto-addon-developer-friendly

Rob Gietema : How to build a site using Nick

Later in the afternoon, kitconcept developer Rob Gietema held an intriguing talk about Nick, a headless CMS written in Node.js, and how easy it is to build a website with it.

https://2023.ploneconf.org/schedule/how-to-build-a-site-using-nick

David Glick : Tales from a production Plone Cluster

Following Rob, kitconcept employee David Glick shared some details and stories about hosting large Plone sites in a Docker Swarm cluster.

https://2023.ploneconf.org/schedule/tales-from-a-production-plone-cluster

Érico Andrei : Unlocking the Power of plone.distribution : A Hands-On Guide

In this talk, Érico Andrei guided us through the feature-rich world of Plone Distributions.

https://2023.ploneconf.org/schedule/unlocking-the-power-of-plone-distribution-a-hands-on-guide

Local sport showcase and party

In the evening CodeSyntax organized a showcase of different local sports, including stone lifting, wood chopping and wood sawing. Timo represented kitconcept in this together with Phillip Bauer of Starzel. After that we concluded the day with cold drinks and Pinxos at the conference party.

Timo and Phillip at work

Day 3

Day 3 was filled with quite technical presentations, providing information on the cutting-edge technology Plone has to offer.

Fred van Dijk : How the Plone Foundation Ai.team manages its websites with CI/CD

On the third day of the Plone Conference, kitconcept employee Fred van Dijk shared news on automating a Plone release and how to host and operate a small Docker Swarm cluster running Plone.

https://2023.ploneconf.org/schedule/how-the-plone-foundation-ai-team-manages-its-websites-with-ci-cd

Víctor Fernández de Alba : volto-light-theme: Volto Theming, Reimagined

After a quick coffee break Víctor Fernández de Alba shared the progress on the Volto-Light-Theme and its inner workings.

https://2023.ploneconf.org/schedule/volto-light-theme-volto-theming-in-2023

Timo Stollenwerk : How we built the Website for the German Aerospace Center (DLR) in less than six months

CEO Timo Stollenwerk treated us to the story of the challenges of migrating large, government-owned websites to a Plone project.

https://2023.ploneconf.org/schedule/how-we-built-the-website-for-the-german-aerospace-center-dlr-in-less-than-six-months

Érico Andrei : Testing your Plone codebase with Pytest

A little later in the afternoon, Érico Andrei presented us with an improved way to test Plone codebases.

https://2023.ploneconf.org/schedule/testing-your-plone-codebase-with-pytest

Rohit Kumar : Workflow Manager with Volto

In his presentation, Rohit Kumar shared the progress on implementing a visual workflow manager in Volto.

https://2023.ploneconf.org/schedule/workflow-manager-in-volto

Summary

The kitconcept team continues to drive innovation in the Plone community. Volto is the default frontend for Plone 6 and dominated the topics during the conference. We are happy to be part of such an amazing community.

18 Oct 2023 3:00pm GMT