16 Sep 2024
Planet Python
PyCharm: 7 Ways To Use Jupyter Notebooks inside PyCharm
Jupyter notebooks allow you to tell stories by creating and sharing data, equations, and visualizations sequentially, with a supporting narrative as you go through the notebook.
Jupyter notebooks in PyCharm Professional provide functionality above and beyond that of browser-based Jupyter notebooks, such as code completion, dynamic plots, and quick statistics, to help you explore and work with your data quickly and effectively.
Let's take a look at 7 ways you can use Jupyter notebooks in PyCharm to achieve your goals. They are:
- Creating or connecting to an existing notebook
- Importing your data
- Getting acquainted with your data
- Using JetBrains AI Assistant
- Exploring your code with PyCharm
- Getting insights from your code
- Sharing your insights and charts
The Jupyter notebook that we used in this demo is available on GitHub.
1. Creating or connecting to an existing notebook
You can create and work on your Jupyter notebooks locally or connect to one remotely with PyCharm. Let's take a look at both options so you can decide for yourself.
Creating a new Jupyter notebook
To work with a Jupyter notebook locally, you need to go to the Project tool window inside PyCharm, navigate to the location where you want to add the notebook, and invoke a new file. You can do this by using either your keyboard shortcuts ⌘N (macOS) / Alt+Ins (Windows/Linux) or by right-clicking and selecting New | Jupyter Notebook.
Give your new notebook a name, and PyCharm will open it ready for you to start work. You can also drag local Jupyter notebooks into PyCharm, and the IDE will automatically recognise them for you.
Connecting to a remote Jupyter notebook
Alternatively, you can connect to a remote Jupyter notebook by selecting Tools | Add Jupyter Connection. You can then choose to start a local Jupyter server, connect to an existing running local Jupyter server, or connect to a Jupyter server using a URL - all of these options are supported.
Now you have your Jupyter notebook, you need some data!
2. Importing your data
Data generally comes from one of two places: a CSV file or a database. Let's look at importing data from a CSV file first.
Importing from a CSV file
Polars and pandas are the two most commonly used libraries for importing data into Jupyter notebooks. I'll give you code for both in this section, and you can check out the documentation for both Polars and pandas and learn how Polars is different to pandas.
You need to ensure your CSV is somewhere in your PyCharm project, perhaps in a folder called `data`. Then you can import pandas and use it to read the data in:
import pandas as pd

df = pd.read_csv("../data/airlines.csv")
In this example, airlines.csv is the file containing the data we want to manipulate. To run this and any code cell in PyCharm, use ⇧⏎ (macOS) / Shift+Enter (Windows/Linux). You can also use the green run arrows on the toolbar at the top.
If you prefer to use Polars, you can use this code:
import polars as pl

df = pl.read_csv("../data/airlines.csv")
Importing from a database
If your data is in a database, as is often the case for internal projects, importing it into a Jupyter notebook will require just a few more lines of code. First, you need to set up your database connection. In this example, we're using PostgreSQL.
For pandas, you need to use this code to read the data in:
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://jetbrains:jetbrains@localhost/demo")
df = pd.read_sql(sql=text("SELECT * FROM airlines"), con=engine.connect())
And for Polars, it's this code:
import polars as pl
from sqlalchemy import create_engine

engine = create_engine("postgresql://jetbrains:jetbrains@localhost/demo")
connection = engine.connect()
query = "SELECT * FROM airlines"
df = pl.read_database(query, connection)
3. Getting acquainted with your data
Now we've read our data in, we can take a look at the DataFrame or `df` as we will refer to it in our code. To print out the DataFrame, you only need a single line of code, regardless of which method you used to read the data in:
df
DataFrames
PyCharm first displays your DataFrame as a table so you can explore it. You can scroll horizontally through the DataFrame and click on any column header to sort the data by that column. You can click on the Show Column Statistics icon on the right-hand side and select Compact or Detailed to get some helpful statistics on each column of data.
Dynamic charts
You can use PyCharm to get a dynamic chart of your DataFrame by clicking on the Chart View icon on the left-hand side. We're using pandas in this example, but Polars DataFrames also have the same option.
Click on the Show Series Settings icon (a cog) on the right-hand side to configure your plot to meet your needs:
In this view, you can hover your mouse over your data to learn more about it and easily spot outliers:
You can do all of this with Polars, too.
4. Using JetBrains AI Assistant
JetBrains AI Assistant has several offerings that can make you more productive when you're working with Jupyter notebooks inside PyCharm. Let's take a closer look at how you can use JetBrains AI Assistant to explain a DataFrame, write code, and even explain errors.
Explaining DataFrames
If you've got a DataFrame but are unsure where to start, you can click the purple AI icon on the right-hand side of the DataFrame and select Explain DataFrame. JetBrains AI Assistant will use its context to give you an overview of the DataFrame:
You can use the generated explanation to aid your understanding.
Writing Code
You can also get JetBrains AI Assistant to help you write code. Perhaps you know what kind of plot you want, but you're not 100% sure what the code should look like. Well, now you can use JetBrains AI Assistant to help you. Let's say you want to use 'matplotlib' to create a chart that finds the relationship between 'TimeMonthName' and 'MinutesDelayedWeather'. By specifying the column names, we're giving more context to the request which improves the reliability of the generated code. Try it with the following prompt:
Give me code using matplotlib to create a chart which finds the relationship between 'TimeMonthName' and 'MinutesDelayedWeather' for my dataframe df
If you like the resulting code, you can use the Insert Snippet at Caret button to insert the code and then run it:
import matplotlib.pyplot as plt

# Assuming your data is in a DataFrame named 'df'
# Replace 'df' with the actual name of your DataFrame if different

# Plotting
plt.figure(figsize=(10, 6))
plt.bar(df['TimeMonthName'], df['MinutesDelayedWeather'], color='skyblue')
plt.xlabel('Month')
plt.ylabel('Minutes Delayed due to Weather')
plt.title('Relationship between TimeMonthName and MinutesDelayedWeather')
plt.xticks(rotation=45)
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.tight_layout()
plt.show()
If you don't want to open the AI Assistant tool window, you can use the AI cell prompt to ask your questions. For example, we can ask the same question here and get the code we need:
Explaining errors
You can also get JetBrains AI Assistant to explain errors for you. When you get an error, click Explain with AI:
You can use the resulting output to further your understanding of the problem and perhaps even get some code to fix it!
5. Exploring your code
PyCharm can help you get an overview of your Jupyter notebook, complete parts of your code to save your fingers, refactor it as required, debug it, and even add integrations to help you take it to the next level.
Tips for navigating and optimizing your code
Our Jupyter notebooks can grow large quite quickly, but thankfully you can use PyCharm's Structure view to see all your notebook's headings by pressing ⌘7 (macOS) / Alt+7 (Windows/Linux).
Code completion
Another helpful feature that you can take advantage of when using Jupyter notebooks inside PyCharm is code completion. You get both basic and type-based code completion out of the box with PyCharm, but you can also enable Full Line Code Completion in PyCharm Professional, which uses a local AI model to provide suggestions. Lastly, JetBrains AI Assistant can also help you write code and discover new libraries and frameworks.
Refactoring
Sometimes you need to refactor your code, and in that case, you only need to know one keyboard shortcut: ⌃T (macOS) / Shift+Ctrl+Alt+T (Windows/Linux). You can then choose the refactoring you want to invoke. Pick from popular options such as Rename, Change Signature, and Introduce Variable, or lesser-known options such as Extract Method, to change your code without changing the semantics:
As your Jupyter notebook grows, it's likely that your import statements will also grow. Sometimes you might import a package such as polars and numpy, but forget that numpy is a transitive dependency of the Polars library, so you don't need to import it separately.
To catch these cases and keep your code tidy, you can invoke Optimize Imports ⌃⌥O (macOS) / Ctrl+Alt+O (Windows/Linux) and PyCharm will remove the ones you don't need.
Debugging your code
You might not have used the debugger in PyCharm yet, and that's okay. Just know that it's there and ready to support you when you need to better understand some behavior in your Jupyter notebook.
Place a breakpoint on the line you're interested in by clicking in the gutter or by using ⌘F8 (macOS) / Ctrl+F8 (Windows/Linux), and then run your code with the debugger attached with the debug icon on the top toolbar:
You can also invoke PyCharm's debugger in your Jupyter notebook with ⌥⇧⏎ (macOS) / Shift+Alt+Enter (Windows/Linux). There are some restrictions when it comes to debugging your code in a Jupyter notebook, but please try this out for yourself and share your feedback with us.
Adding integrations into PyCharm
IDEs wouldn't be complete without the integrations you need. PyCharm Professional 2024.2 brings two new integrations to your workflow: Databricks and Hugging Face.
You can enable both integrations by going to Settings ⌘, (macOS) / Ctrl+Alt+S (Windows/Linux), selecting Plugins, and searching for the plugin with the corresponding name on the Marketplace tab.
6. Getting insights from your code
When analyzing your data, there's a difference between categorical and continuous variables. Categorical data has a finite number of discrete groups or categories, whereas continuous data is one continuous measurement. Let's look at how we can extract different insights from both the categorical and continuous variables in our airlines dataset.
Continuous variables
We can get a sense of how continuous data is distributed by looking at measures of the average value in that data and the spread of the data around the average. In normally distributed data, we can use the mean to measure the average and the standard deviation to measure the spread. However, when data is not distributed normally, we can get more accurate information using the median and the interquartile range (this is the difference between the seventy-fifth and twenty-fifth percentiles). Let's look at one of our continuous variables to understand the difference between these measurements.
In our dataset, we have lots of continuous variables, but we'll work with `NumDelaysLateAircraft` to see what we can learn. Let's use the following code to get some summary statistics for just that column:
df['NumDelaysLateAircraft'].describe()
Looking at this data, we can see that there is a big difference between the `mean` of ~789 and the `median` (our fiftieth percentile, shown as "50%" in the describe() output) of ~618.
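If you want those robust measures directly, rather than reading them off the describe() output, a couple of pandas one-liners will compute them - a minimal sketch, reusing the `df` loaded earlier:

median_delay = df['NumDelaysLateAircraft'].median()
iqr = df['NumDelaysLateAircraft'].quantile(0.75) - df['NumDelaysLateAircraft'].quantile(0.25)
print(f"median: {median_delay}, IQR: {iqr}")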
Such a gap between the mean and the median indicates a skew in our variable's distribution, so let's use PyCharm to explore it further. Click on the Chart View icon at the top left. Once the chart has been rendered, we'll change the series settings represented by the cog on the right-hand side of the screen. Change your x-axis to `NumDelaysLateAircraft` and your y-axis to `NumDelaysLateAircraft`.
Now drop down the y-axis using the little arrow and select `count`. The final step is to change the chart type to Histogram using the icons in the top-right corner:
Now that we can see the skew laid out visually, we can see that most of the time, the delays are not too excessive. However, we have a number of more extreme delays - one aircraft is an outlier on the right and it was delayed by 4,509 minutes, which is just over three days!
In statistics, the mean is very sensitive to outliers because it's an arithmetic average, unlike the median, which, if you ordered all observations in your variable, would sit exactly in the middle of these values. When the mean is higher than the median, it's because you have outliers on the right-hand side of the data, the higher side, as we had here. In such cases, the median is a better indicator of the true average delay, as you can see if you look at the histogram.
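A tiny, self-contained example makes this sensitivity concrete - one extreme delay drags the mean far upwards while the median barely moves:

import pandas as pd

delays = pd.Series([10, 12, 15, 18, 20])
print(delays.mean(), delays.median())  # 15.0 15.0

delays_with_outlier = pd.Series([10, 12, 15, 18, 4509])
print(delays_with_outlier.mean(), delays_with_outlier.median())  # 912.8 15.0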
Categorical variables
Let's take a look at how we can use code to get some insights from our categorical variables. In order to get something that's a little more interesting than just `AirportCode`, we'll analyze how many aircraft were delayed by weather, `NumDelaysWeather`, in the different months of the year, `TimeMonthName`.
Use this code to group `NumDelaysWeather` with `TimeMonthName`:
result = df[['TimeMonthName', 'NumDelaysWeather']].groupby('TimeMonthName').sum()
result
This gives us the DataFrame again in table format, but click the Chart View icon on the left-hand side of the PyCharm UI to see what we can learn:
This is okay, but it would be helpful to have the months ordered according to the Gregorian calendar. Let's first create a variable for the months that we expect:
month_order = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]
Now we can ask PyCharm to use the order that we've just defined in `month_order`:
# Convert the 'TimeMonthName' column to a categorical type with the specified order
df["TimeMonthName"] = pd.Categorical(df["TimeMonthName"], categories=month_order, ordered=True)

# Now you can group by 'TimeMonthName' and perform the sum operation, specifying observed=False
result = df[['TimeMonthName', 'NumDelaysWeather']].groupby('TimeMonthName', observed=False).sum()
result
We then click on the Chart View icon once more, but something's wrong!
Are we really saying that there were no flights delayed in February? That can't be right. Let's check our assumption with some more code:
df['TimeMonthName'].value_counts()
Aha! Now we can see that `Febuary` has been misspelt in our dataset, so it doesn't match the correct spelling in our `month_order` variable. Let's update the spelling in our dataset with this code:
df["TimeMonthName"] = df["TimeMonthName"].replace("Febuary", "February") df['TimeMonthName'].value_counts()
Great, that looks right. Now we should be able to re-run our earlier code and get a chart view that we can interpret:
From this view, we can see that there is a higher number of delays during the months of December, January, and February, and then again in June, July, and August. However, we have not standardized this data against the total number of flights, so there may simply be more flights in those winter and summer months, which would produce more delays in absolute terms.
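If your copy of the dataset includes a column with the total number of flights per month, standardizing is a small extension of the same groupby. Note that `NumFlightsTotal` below is a hypothetical column name - substitute whatever your data actually uses:

# 'NumFlightsTotal' is hypothetical - replace it with the real column name
totals = df[['TimeMonthName', 'NumDelaysWeather', 'NumFlightsTotal']].groupby('TimeMonthName', observed=False).sum()
totals['WeatherDelayRate'] = totals['NumDelaysWeather'] / totals['NumFlightsTotal']
totals['WeatherDelayRate']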
7. Sharing your insights and charts
When your masterpiece is complete, you'll probably want to export data, and you can do that in various ways with Jupyter notebooks in PyCharm.
Exporting a DataFrame
You can export a DataFrame by clicking on the down arrow on the right-hand side:
You have lots of helpful formats to choose from, including SQL, CSV, and JSON:
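If you'd rather export from code than through the UI, the usual DataFrame methods work here too - a pandas sketch (Polars offers the analogous write_csv and write_json methods):

df.to_csv("../data/airlines_export.csv", index=False)
df.to_json("../data/airlines_export.json", orient="records")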
Exporting charts
If you prefer to export the interactive plot, you can do that too by clicking on the Export to PNG icon on the right-hand side:
Viewing your notebook in a browser
You can view your whole Jupyter notebook at any time in a browser by clicking the icon in the top-right corner of your notebook:
Finally, if you want to export your Jupyter notebook to a Python file, 2024.2 lets you do that too! Right-click on your Jupyter notebook in the Project tool window and select Convert to Python File. Follow the instructions, and you're done!
Summary
Using Jupyter notebooks inside PyCharm Professional provides extensive functionality, enabling you to create code faster, explore data easily, and export your projects in the formats that matter to you.
Download PyCharm Professional to try it out for yourself! Get an extended trial today and experience the difference PyCharm Professional can make in your data science endeavors.
Use the promo code "PyCharmNotebooks" at checkout to activate your free 60-day subscription to PyCharm Professional. The free subscription is available for individual users only.
16 Sep 2024 10:48am GMT
Zato Blog: Smart IoT integrations with Akenza and Python
Smart IoT integrations with Akenza and Python
Overview
The Akenza IoT platform, on its own, excels in collecting and managing data from a myriad of IoT devices. However, it is integrations with other systems, such as enterprise resource planning (ERP), customer relationship management (CRM) platforms, workflow management or environmental monitoring tools that enable a complete view of the entire organizational landscape.
Complementing Akenza's capabilities, and enabling the smooth integrations, is the versatility of Python programming. Given how flexible Python is, the language is a natural choice when looking for a bridge between Akenza and the unique requirements of an organization looking to connect its intelligent infrastructure.
This article is about combining the two, Akenza and Python. At the end of it, you will have:
- A bi-directional connection to Akenza using Python and WebSockets
- A Python service subscribed to and receiving events from IoT devices through Akenza
- A Python service that will be sending data to IoT devices through Akenza
Since WebSocket connections are persistent, using them enhances the responsiveness of IoT applications, which in turn lets data exchange occur in real time, fostering a dynamic and agile integrated ecosystem.
Python and Akenza WebSocket connections
First, let's have a look at full Python code - to be discussed later.
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import WSXAdapter

# ###############################################################################################
# ###############################################################################################

if 0:
    from zato.server.generic.api.outconn.wsx.common import OnClosed, \
        OnConnected, OnMessageReceived

# ###############################################################################################
# ###############################################################################################

class DemoAkenza(WSXAdapter):

    # Our name
    name = 'demo.akenza'

    def on_connected(self, ctx:'OnConnected') -> 'None':
        self.logger.info('Akenza OnConnected -> %s', ctx)

# ###############################################################################################

    def on_message_received(self, ctx:'OnMessageReceived') -> 'None':

        # Confirm what we received
        self.logger.info('Akenza OnMessageReceived -> %s', ctx.data)

        # This is an indication that we are connected ..
        if ctx.data['type'] == 'connected':

            # .. for testing purposes, use a fixed asset ID ..
            asset_id:'str' = 'abc123'

            # .. build our subscription message ..
            data = {'type': 'subscribe', 'subscriptions': [{'assetId': asset_id, 'topic': '*'}]}

            ctx.conn.send(data)

        else:
            # .. if we are here, it means that we received a message other than type "connected".
            self.logger.info('Akenza message (other than "connected") -> %s', ctx.data)

# ###############################################################################################

    def on_closed(self, ctx:'OnClosed') -> 'None':
        self.logger.info('Akenza OnClosed -> %s', ctx)

# ###############################################################################################
# ###############################################################################################
Now, deploy the code to Zato and create a new outgoing WebSocket connection. Replace the API key with your own and make sure to set the data format to JSON.
Receiving messages from WebSockets
The WebSocket Python services that you author have three methods of interest, each reacting to specific events:
- on_connected - Invoked as soon as a WebSocket connection has been opened. Note that this is a low-level event and, in the case of Akenza, it does not mean yet that you are able to send or receive messages from it.
- on_message_received - The main method that you will be spending most time with. Invoked each time a remote WebSocket sends, or pushes, an event to your service. With Akenza, this method will be invoked each time Akenza has something to inform you about, e.g. that you subscribed to messages or that a device has sent new data.
- on_closed - Invoked when a WebSocket has been closed. It is no longer possible to use a WebSocket once it has been closed.
Let's focus on on_message_received, which is where the majority of the action takes place. It receives a single parameter of type OnMessageReceived, which describes the context of the received message. That is, it is in the "ctx" that you will find both the current request as well as a handle to the WebSocket connection through which you can reply to the message.
The two important attributes of the context object are:
- ctx.data - A dictionary of data that Akenza sent to you
- ctx.conn - The underlying WebSocket connection through which the data was sent and through which you can send a response
Now, the logic in the on_message_received method above is clear:
- First, we check if Akenza confirmed that we are connected (type=='connected'). You need to check the type of a message each time Akenza sends something to you and react to it accordingly.
- Next, because we know that we are already connected (e.g. our API key was valid), we can subscribe to events from a given IoT asset. For testing purposes, the asset ID is given directly in the source code but, in practice, this information would be read from a configuration file or database.
- Finally, for messages of any other type we simply log their details. Naturally, a full integration would handle them per what is required in given circumstances, e.g. by transforming and pushing them to other applications or management systems.
A sample message from Akenza will look like this:
INFO - WebSocketClient - Akenza message (other than "connected") -> {'type': 'subscribed',
'replyTo': None, 'timeStamp': '2023-11-20T13:32:50.028Z',
'subscriptions': [{'assetId': 'abc123', 'topic': '*', 'tagId': None, 'valid': True}],
'message': None}
How to send messages to WebSockets
An aspect not to be overlooked is communication in the other direction, that is, sending of messages to WebSockets. For instance, you may have services invoked through REST APIs, or perhaps from a scheduler, and their job will be to transform such calls into configuration commands for IoT devices.
Here is the core part of such a service, reusing the same Akenza WebSocket connection:
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

# ##############################################################################################
# ##############################################################################################

class DemoAkenzaSend(Service):

    # Our name
    name = 'demo.akenza.send'

    def handle(self) -> 'None':

        # The connection to use
        conn_name = 'Akenza'

        # Get a connection ..
        with self.out.wsx[conn_name].conn.client() as client:

            # .. and send data through it.
            client.send('Hello')

# ##############################################################################################
# ##############################################################################################
Note that responses to the messages sent to Akenza will be received using your first service's on_message_received method - WebSockets-based messaging is inherently asynchronous and the channels are independent.
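As the integration grows, a natural next step is to turn the first service's on_message_received method into a small dispatcher keyed on the message type. Here is a sketch - the class name is hypothetical, and it only reuses the message types we have already seen plus a generic fallback:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import WSXAdapter

if 0:
    from zato.server.generic.api.outconn.wsx.common import OnMessageReceived

class DemoAkenzaDispatcher(WSXAdapter):

    # A hypothetical name for this variant of the first service
    name = 'demo.akenza.dispatcher'

    def on_message_received(self, ctx:'OnMessageReceived') -> 'None':

        msg_type = ctx.data['type']

        if msg_type == 'connected':
            # Subscribe to all topics of our test asset, as before
            data = {'type': 'subscribe', 'subscriptions': [{'assetId': 'abc123', 'topic': '*'}]}
            ctx.conn.send(data)

        elif msg_type == 'subscribed':
            # Akenza confirmed the subscription - log its details
            self.logger.info('Subscription confirmed -> %s', ctx.data['subscriptions'])

        else:
            # Anything else, e.g. actual data from IoT devices, would be
            # transformed and routed onward here per integration needs
            self.logger.info('Akenza event -> %s', ctx.data)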
Now, we have a complete picture of real-time, IoT connectivity with Akenza and WebSockets. We are able to establish persistent, responsive connections to assets, we can subscribe to and send messages to devices, and that lets us build intelligent automation and integration architectures that make use of powerful, emerging technologies.
More resources
➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
16 Sep 2024 8:00am GMT
Django Weblog: Nominate a Djangonaut for the 2024 Malcolm Tredinnick Memorial Prize
Hello Everyone 👋 It is that time of year again when we recognize someone from our community in memory of our friend Malcolm.
Malcolm was an early core contributor to Django and had both a huge influence and impact on Django as we know it today. Besides being knowledgeable he was also especially friendly to new users and contributors. He exemplified what it means to be an amazing Open Source contributor. We still miss him to this day.
The prize
The Django Software Foundation Prizes page summarizes it nicely:
The Malcolm Tredinnick Memorial Prize is a monetary prize, awarded annually, to the person who best exemplifies the spirit of Malcolm's work - someone who welcomes, supports, and nurtures newcomers; freely gives feedback and assistance to others, and helps to grow the community. The hope is that the recipient of the award will use the award stipend as a contribution to travel to a community event -- a DjangoCon, a PyCon, a sprint -- and continue in Malcolm's footsteps.
Please make your nominations using our form: 2024 Malcolm Tredinnick Memorial Prize.
We will take nominations until Monday, September 30th, 2024, Anywhere on Earth, and will announce the winner(s) soon after the next DSF Board meeting in October. If you have any questions please reach out to the DSF Board at foundation@djangoproject.com.
16 Sep 2024 5:01am GMT
13 Sep 2024
Django community aggregator: Community blog posts
django-content-editor now supports nested sections
django-content-editor now supports nested sections
django-content-editor (and its ancestor FeinCMS) has been the Django admin extension for editing content consisting of reusable blocks since 2009. In recent years we have more and more often started automatically grouping related items, e.g. for rendering a sequence of images as a gallery. But sometimes it's nice to give editors more control. This has long been possible by using blocks which open a subsection and blocks which close a subsection, but it hasn't been friendly to content managers, especially when using nested sections.
The content editor now has first-class support for such nested sections. Here's a screenshot showing the nesting:
Finally it's possible to visually group blocks into sections, collapse those sections at once, and drag and drop whole sections into place instead of having to select the involved blocks individually.
The best part about it is that the content editor still supports all Django admin widgets, as long as those widgets have support for the Django administration interface's inline form events! Moving DOM nodes around breaks attached JavaScript behaviors, but we do not actually move DOM nodes around after the initialization - instead, we use Flexbox ordering to visually reorder blocks. It's a bit more work than using a ready-made sortable plugin, but - as mentioned - the prize is that we don't break any other Django admin extensions.
Simple patterns
I already reacted to an earlier blog post by Lincoln Loop in my post My reaction to the block-driven CMS blog post.
The latest blog post, Solving the Messy Middle: a Simple Block Pattern for Wagtail CMS was interesting as well. It dives into the configuration of a Wagtail stream field which allows composing content out of reusable blocks of content (sounds familiar!). The result is saved in a JSON blob in the database with all the advantages and disadvantages that entails.
Now, django-content-editor is a worthy competitor when you do not want to add another interface to your website besides the user-facing frontend and the Django administration interface.
The example from the Lincoln Loop blog post can be replicated quite closely with django-content-editor by using sections. I'm using the django-json-schema-editor package for the section plugin since it easily allows adding more fields if some section type needs it.
Here's an example model definition:
# Models
from content_editor.models import Region, create_plugin_base
from django.db import models
from django_json_schema_editor.plugins import JSONPluginBase
from feincms3 import plugins


class Page(models.Model):
    # You have to define regions; each region gets a tab in the admin interface
    regions = [Region(key="content", title="Content")]

    # Additional fields for the page...


PagePlugin = create_plugin_base(Page)


class RichText(plugins.richtext.RichText, PagePlugin):
    pass


class Image(plugins.image.Image, PagePlugin):
    pass


class Section(JSONPluginBase, PagePlugin):
    pass


AccordionSection = Section.proxy(
    "accordion",
    schema={"type": "object", "properties": {"title": {"type": "string"}}},
)

CloseSection = Section.proxy(
    "close",
    schema={"type": "object", "properties": {}},
)
Here's the corresponding admin definition:
# Admin
from content_editor.admin import ContentEditor
from django_json_schema_editor.plugins import JSONPluginInline
from feincms3 import plugins
@admin.register(models.Page)
class PageAdmin(ContentEditor):
inlines = [
plugins.richtext.RichTextInline.create(models.RichText),
plugins.image.ImageInline.create(models.Image),
JSONPluginInline.create(models.AccordionSection, sections=1),
JSONPluginInline.create(models.CloseSection, sections=-1),
]
The somewhat cryptic sections= argument says how many levels of sections the individual blocks open or close.
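To make that convention concrete, here is a hypothetical sketch - the "tabs" section type is invented for illustration; positive values open that many levels, negative values close them, and ordinary content blocks leave the nesting level unchanged:

# A hypothetical "tabs" section type, following the same proxy pattern as above
TabsSection = Section.proxy(
    "tabs",
    schema={"type": "object", "properties": {"title": {"type": "string"}}},
)

# Registered in the admin alongside the other inlines:
#   JSONPluginInline.create(models.TabsSection, sections=1)   # opens one level
#   JSONPluginInline.create(models.CloseSection, sections=-1) # closes one level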
To render the content including accordions I'd probably use a feincms3 renderer. At the time of writing the renderer definition for sections is a bit tricky.
from collections import deque

from feincms3.renderer import RegionRenderer, render_in_context, template_renderer

from . import models


class PageRenderer(RegionRenderer):
    def handle(self, plugins, context):
        plugins = deque(plugins)
        yield from self._handle(plugins, context)

    def _handle(self, plugins, context, *, in_section=False):
        while plugins:
            if isinstance(plugins[0], models.Section):
                section = plugins.popleft()
                if section.type == "close":
                    if in_section:
                        return
                    # Ignore close section plugins when not inside a section
                    continue
                if section.type == "accordion":
                    yield render_in_context(context, "accordion.html", {
                        "title": section.data["title"],
                        "content": self._handle(plugins, context, in_section=True),
                    })
            else:
                yield self.render_plugin(plugins.popleft(), context)


renderer = PageRenderer()
renderer.register(models.RichText, template_renderer("plugins/richtext.html"))
renderer.register(models.Image, template_renderer("plugins/image.html"))
renderer.register(models.Section, "")
Closing thoughts
Sometimes, I think to myself, I'll "just" write a "simple" blog post. I get what I deserve for using those forbidden words. This blog post is neither short nor simple. That being said, only the rendering code is a bit tricky; the rest is quite straightforward. The amount of code in django-content-editor and feincms3 is reasonable as well. Even though it may look like a lot, you'll still be running less code in production than when using comparable solutions built on Django.
13 Sep 2024 5:00pm GMT
Django News - Python 3.13.0RC2 - Sep 13th 2024
News
Python 3.13.0RC2 and security updates for 3.8 through 3.12
Python 3.13.0RC2 and security updates for Python 3.12.6, 3.11.10, 3.10.15, 3.9.20, and 3.8.20 are now available!
DjangoCon US 2024 last call!
DjangoCon US starts September 22nd. It's the last call to buy an in-person or online ticket to attend this year!
Python in Visual Studio Code - September 2024 Release
The Python extension now supports Django unit tests.
Updates to Django
Today 'Updates to Django' is presented by Raffaella Suardini from Djangonaut Space!
Last week we had 12 pull requests merged into Django by 10 different contributors - including 4 first-time contributors! Congratulations to SirenityK, Mariatta, Wassef Ben Ahmed and github-user-en for having their first commits merged into Django - welcome on board!
Last chance to apply for Djangonaut Space 🚀
Applications close on September 14. For more information, check this article that explains the selection process. Apply here
Django Newsletter
Sponsored Link 1
HackSoft - Your Django Development Partner Beyond Code
Elevate your Django projects with HackSoft! Try our expert consulting services and kickstart your project.
Articles
Django from first principles, part 18
The final post in a series on building and refactoring a Django blog site from scratch.
Django: rotate your secret key, fast or slow
Adam Johnson covers the two main ways to rotate secret keys, including a Django 4.1 feature that allows rotating to a new key whilst accepting data signed with the old one.
django-filter: filtering a foreign key model property
How to filter a foreign key model property with django-filter.
Django: a pattern for settings-configured API clients
How to get around the problem that an API client is instantiated as a module-level variable based on some settings.
UV with Django
Using UV to manage dependencies of your Django application.
Signatures are like backups · Alex Gaynor
"Backups don't matter, only restores matter."
Tutorials
Django-allauth: Site Matching Query Does Not Exist
How to fix a common configuration mistake in django-allauth.
Videos
Djangonaut Space Overview and Ask Me Anything (AMA)
This is an explanation of the Djangonaut Space program sessions, with a Q&A at the end. It has specific details on Session 3 of 2024, but the information is relevant for future sessions.
Session 3 applications close on September 14th, so apply if interested!
DjangoCon EU 2013 - Class-Based Views: Untangling the mess
This talk is from 2013, but it is still relevant to anyone dealing with function-based and (generic) class-based views. Russell Keith-Magee goes into the history of why GCBVs were added.
Sponsored Link 2
Try Scout APM for free!
Sick of performance issues? Enter Scout's APM tool for Python apps. Easily pinpoint and fix slowdowns with intelligent tracing logic. Optimize performance hassle-free, delighting your users.
Podcasts
Django Chat #165: Fall 2024 Podcast Relaunch
This mini-episode starts off the fall season and focuses on what's new in Django, upcoming DjangoCon US talks, thoughts on the User model, Carlton's new Stack Report newsletter, mentoring mentors, and more.
Django News Jobs
Back-end developers at ISM Fantasy Games 🆕
Python Engineer - API and SaaS Application Development at Aidentified, LLC
Software Developer at Habitat Energy
Senior Fullstack Python Engineer at Safety Cybersecurity
Django Newsletter
Projects
kennethlove/django-migrator
The Migrator project provides custom Django management commands to manage database migrations. It includes commands to revert and redo migrations for a specified app or the entire project.
carltongibson/django-unique-user-email
Enable login-by-email with the default User model for your Django project by making auth.User.email unique.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
13 Sep 2024 3:00pm GMT
Cloud Migration Beginning - Building SaaS #202
In this episode, we started down the path of migrating School Desk off of Heroku and onto Digital Ocean. Most of the effort was on tool changes and beginning to make a Dockerfile for deploying the app to the new setup.
13 Sep 2024 5:00am GMT
11 Sep 2024
Planet Twisted
Glyph Lefkowitz: Python macOS Framework Builds
When you build Python, you can pass various options to ./configure that change aspects of how it is built. There is documentation for all of these options, and they are things like --prefix to tell the build where to install itself, --without-pymalloc if you have some esoteric need for everything to go through a custom memory allocator, or --with-pydebug.
One of these options only matters on macOS, and its effects are generally poorly understood. The official documentation just says "Create a Python.framework rather than a traditional Unix install." But… do you need a Python.framework? If you're used to running Python on Linux, then a "traditional Unix install" might sound pretty good; more consistent with what you are used to.
If you use a non-Framework build, most stuff seems to work, so why should anyone care? I have mentioned it as a detail in my previous post about Python on macOS, but even I didn't really explain why you'd want it, just that it was generally desirable.
The traditional answer to this question is that you need a Framework build "if you want to use a GUI", but this is demonstrably not true. At first it might not seem so, since the go-to Python GUI test is "run IDLE"; many non-Framework builds also omit Tkinter because they don't ship a Tk dependency, so IDLE won't start. But other GUI libraries work fine. For example, uv tool install runsnakerun / runsnake will happily pop open a GUI window, Framework build or not. So it bears some explaining.
Wait, what is a "Framework" anyway?
Let's back up and review an important detail of the mac platform.
On macOS, GUI applications are not just an executable file, they are organized into a bundle, which is a directory with a particular layout, that includes metadata, that launches an executable. A thing that, on Linux, might live in a combination of /bin/foo for its executable and /share/foo/ for its associated data files, is instead on macOS bundled together into Foo.app, and those components live in specified locations within that directory.
A framework is also a bundle, but one that contains a library. Since they are directories, Applications can contain their own Frameworks and Frameworks can contain helper Applications. If /Applications is roughly equivalent to the Unix /bin, then /Library/Frameworks is roughly equivalent to the Unix /lib.
App bundles are contained in a directory with a .app suffix, and frameworks are a directory with a .framework suffix.
So what do you need a Framework for in Python?
The truth about Framework builds is that there is not really one specific thing that you can point to that works or doesn't work, where you "need" or "don't need" a Framework build. I was not able to quickly construct an example that trivially fails in a non-framework context for this post, but I didn't try that many different things, and there are a lot of different things that might fail.
The biggest issue is not actually the Python.framework itself. The metadata on the framework is not used for much outside of a build or linker context. However, Python's Framework builds also ship with a stub application bundle, which places your Python process into a normal application(-ish) execution context all the time, which allows for various platform APIs like [NSBundle mainBundle] to behave in the normal, predictable ways that all of the numerous, various frameworks included on Apple platforms expect.
Various Apple platform features might want to ask a process questions like "what is your unique bundle identifier?" or "what entitlements are you authorized to access" and even beginning to answer those questions requires information stored in the application's bundle.
Python does not ship with a wrapper around the core macOS "cocoa" API itself, but we can use pyobjc to interrogate this. After installing pyobjc-framework-cocoa, I can do this:
>>> from Foundation import NSBundle
>>> NSBundle.mainBundle()
On a non-Framework build, it might look like this:
NSBundle </path/to/python/bin> (loaded)
But on a Framework build (even in a venv in a similar location), it might look like this:
NSBundle </path/to/Python.framework/Resources/Python.app> (loaded)
This is why, at various points in the past, GUI access required a framework build, since connections to the window server would just be rejected for Unix-style executables. But that was an annoying restriction, so it was removed at some point, or at least, the behavior was changed. As far as I can tell, this change was not documented. But other things like user notifications or geolocation might need to identify an application for preferences or permissions purposes, respectively. Even something as basic as "what is your app icon" for what to show in alert dialogs is information contained in the bundle. So if you use a library that wants to make use of any of these features, it might work, or it might behave oddly, or it might silently fail in an undocumented way.
This might seem like undocumented, unnecessary cruft, but it is that way because it's just basic stuff the platform expects to be there for a lot of different features of the platform.
/etc/ builds
Still, this might seem like a strangely vague description of this feature, so it might be helpful to examine it via a metaphor to something you are more familiar with. If you're familiar with more Unix-style application development, consider a junior developer - let's call him Jim - asking you whether he should use an "/etc build" or not as a basis for his Docker containers.
What is an "/etc build"? Well, base images like ubuntu come with a bunch of files in /etc, and Jim just doesn't see the point of any of them, so he likes to delete everything in /etc just to make things simpler. It seems to work so far. More experienced Unix engineers that he has asked react negatively and make a face when he tells them this, and seem to think that things will break. But their app seems to work fine, and none of these engineers can demonstrate some simple function breaking, so what's the problem?
Off the top of your head, can you list all the features that all the files in /etc are needed for? Why not? Jim thinks it's weird that all this stuff is undocumented, and it must just be unnecessary cruft.
If Jim were to come back to you later with a problem like "it seems like hostname resolution doesn't work sometimes" or "ls says all my files are owned by 1001 rather than the user name I specified in my Dockerfile", you'd probably say "please, put /etc back, I don't know exactly what file you need but lots of things just expect it to be there".
This is what a framework vs. a non-Framework build is like. A Framework build just includes all the pieces of the build that the macOS platform expects to be there. What pieces do what features need? It depends. It changes over time. And the stub that Python's Framework builds include may not be sufficient for some more esoteric stuff anyway. For example, if you want to use a feature that needs a bundle that has been signed with custom entitlements to access something specific, like the virtualization API, you might need to build your own app bundle. To extend our analogy with Jim, the fact that /etc exists and has the default files in it won't always be sufficient; sometimes you have to add more files to /etc, with quite specific contents, for some features to work properly. But "don't get rid of /etc (or your application bundle)" is pretty good advice.
Do you ever want a non-Framework build?
macOS does have a Unix subsystem, and many Unix-y things work, for Unix-y tasks. If you are developing a web application that mostly runs on Linux anyway and never care about using any features that touch the macOS-specific parts of your mac, then you probably don't have to care all that much about Framework builds. You're not going to be surprised one day by non-framework builds suddenly being unable to use some basic Unix facility like sockets or files. As long as you are aware of these limitations, it's fine to install non-Framework builds. I have a dozen or so Pythons on my computer at any given time, and many of them are not Framework builds.
Framework builds do have some small drawbacks. They tend to be larger, they can be a bit more annoying to relocate, and they typically want to live in a location like /Library or ~/Library. You can move Python.framework into an application bundle according to certain rules, as any bundling tool for macOS will have to do, but it might not work in random filesystem locations. This may make managing a really large number of Python versions more annoying.
Most of all, the main reason to use a non-Framework build is if you are building a tool that manages a fleet of Python installations to perform some automation that needs to know about Python installs, and you want to write one simple tool that does stuff on Linux and on macOS. If you know you don't need any platform-specific features, don't want to spend the (not insignificant!) effort to cover those edge cases, and you get a lot of value from that level of consistency (for example, a teaching environment or interdisciplinary development team with a lot of platform diversity) then a non-framework build might be a better option.
Why do I care?
Personally, I think it's important for Framework builds to be the default for most users, because I think that as much stuff should work out of the box as possible. Any user who sees a neat library that lets them get control of some chunk of data stored on their mac - map data, health data, game center high scores, whatever it is - should be empowered to call into those APIs and deal with that data for themselves.
Apple already makes it hard enough with their thicket of code-signing and notarization requirements for distributing software, aggressive privacy restrictions which prevent API access to some of this data in the first place, all these weird Unix-but-not-Unix filesystem layout idioms, sandboxing that restricts access to various features, and the use of esoteric abstractions like mach ports for communications behind the scenes. We don't need to make it even harder by making the way that you install your Python be a surprise gotcha variable that determines whether or not you can use an API like "show me a user notification when my data analysis is done" or "don't do a power-hungry data analysis when I'm on battery power", especially if it kinda-sorta works most of the time, but only fails on certain patch-releases of certain versions of the operating system, because an implementation detail of a proprietary framework changed in the meanwhile to require an application bundle where it didn't before, or vice versa.
More generally, I think that we should care about empowering users with local computation and platform access on all platforms, Linux and Windows included. This just happens to be one particular quirk of how native platform integration works on macOS specifically.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. For this one, thanks especially to long-time patron Hynek who requested it specifically. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like "how can we set up our Mac developers' laptops with Python".
11 Sep 2024 7:43pm GMT
04 Sep 2024
Planet Twisted
Hynek Schlawack: Production-ready Python Docker Containers with uv
Starting with 0.3.0, Astral's uv brought many great features, including support for cross-platform lock files via uv.lock. Together with subsequent fixes, it has become Python's finest workflow tool for my (non-scientific) use cases. Here's how I build production-ready containers, as fast as possible.
04 Sep 2024 10:00am GMT
03 Sep 2024
Planet Twisted
Hynek Schlawack: How to Ditch Codecov for Python Projects
Codecov's unreliability breaking CI on my open source projects has been a constant source of frustration for me for years. I have found a way to enforce coverage over a whole GitHub Actions build matrix that doesn't rely on third-party services.
03 Sep 2024 12:00am GMT
07 Feb 2024
Planet Plone - Where Developers And Integrators Write
PloneExpanse: Image scales wrongly regenerating
I had a problem with my Frankenstein stack of Plone 4 with various bits (core libraries) upgraded on it. Here's a description of my bug: when I uploaded an image and tried to use it in a Volto block that referenced its image scale download URL (such as @@images/<random-uuid4>.jpg), the image URL didn't work; it resulted in a 404 error. When I reindexed the image in the catalog, it worked. Now, the funky part is that I could reproduce the problem not only on my "doomed" Plone 4 stack, but also in the modern Plone 6 stack that we use for our main customer.
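For reference, reindexing a single object like that image can be done from a debug session along these lines - a minimal sketch, where the site id (Plone) and the traversal path are invented for illustration:

# bin/client debug
>>> image = app.Plone.restrictedTraverse('news/some-image')  # hypothetical path
>>> image.reindexObject()
>>> import transaction
>>> transaction.commit()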
07 Feb 2024 6:06am GMT
29 Jan 2024
Planet Plone - Where Developers And Integrators Write
PloneExpanse: Cleanup zc async
For my own reference, as I had to do a cleanup of zc.async tasks. The interface was too slow.

# bin/zeo_client debug
>>> queue = app._p_jar.root()['zc.async']['']
>>> from zc.async.queue import Queue
>>> Queue.__init__(queue)
>>> import transaction
>>> transaction.commit()
# for i in range(len(queue)):
#     queue.pull(0)
29 Jan 2024 1:11pm GMT
18 Oct 2023
Planet Plone - Where Developers And Integrators Write
kitconcept GmbH: Plone Conference 2023 - Eibar
The 2023 edition of the annual Plone conference happened from October 2nd to 8th in Eibar, Basque Country.
The kitconcept team was present with 8 developers. Our team members gave plenty of talks, keynotes, and trainings.
kitconcept and friends having team dinner
Trainings
Two days filled with trainings, free for all conference attendees. This offer is unique in the Plone community, and kitconcept team members were the backbone of the trainings about how to use Plone 6 - from deployment to development to deep insights into how Volto and Plone 6 work.
Alok Kumar, Jakob Kahl: Volto and React
Alok Kumar and Jakob Kahl ran a two-day training to help developers get started with Volto and React:
https://2023.ploneconf.org/schedule/training/volto-and-react
Check out their trainings online if you want to catch up:
Érico Andrei : Installing Plone
Our colleague Érico Andrei gave a training about installing Plone on day 2, the 3rd of October.
https://2023.ploneconf.org/schedule/training/installing-plone
Víctor Fernandez de Alba, Tiberiu Ichim: Effective Volto
Víctor Fernandez de Alba is kitconcept's CTO and the Volto release manager. Tiberiu Ichim is a Volto team member and one of the key contributors to Volto. They gave key insights into how to work effectively with Volto. If you have experience with Volto and want to become a real pro, this is the training to attend.
https://2023.ploneconf.org/schedule/training/effective-volto
https://training.plone.org/effective-volto/index.html
Day One
On day one, kitconcept team members presented two talks, including the main keynote of the day.
Keynote State of Plone
Team members Érico Andrei, Víctor Fernández de Alba and Timo Stollenwerk together with Maurits van Rees of Zest Software and Eric Steele of Salesforce presented the very first Keynote of the Conference titled "State of Plone".
Breaking boundaries: Plone as headless CMS by Víctor Fernández de Alba
Our colleague Víctor Fernández de Alba gave a presentation about the challenges faced by the Plone content management system in keeping up with modern frontend developments and the growing popularity of headless CMSs.
Breaking boundaries: Plone as headless CMS
https://2023.ploneconf.org/schedule/breaking-boundaries-plone-as-headless-cms
Day Two
Day Two was an informative day, packed with interesting talks, panels, and presentations.
Volto Team Meeting
Panel: The Future of Search in Plone, 2023 Edition
Timo Stollenwerk, Sally Kleinfeldt, Tiberiu Ichim, Eric Steele, Eric Bréhault, Rikupekka Oksanen, Érico Andrei, and Guido Stevens hosted a very interesting panel about the future of search in Plone.
This panel provided a brief history and modern examples of Plone search, followed by a discussion of what improvements are needed - both from a marketing and technical perspective. This topic was first discussed at the 2011 conference and it was interesting to see how opinions had changed.
Alok Kumar : Is your Volto addon developer friendly ?
Meanwhile, kitconcept frontend developer Alok Kumar gave a presentation about what makes a Volto add-on developer-friendly, and how we as developers can improve the way we develop add-ons for Volto.
https://2023.ploneconf.org/schedule/is-your-volto-addon-developer-friendly
Rob Gietema : How to build a site using Nick
Later in the afternoon, kitconcept developer Rob Gietema gave an intriguing talk about Nick, a headless CMS written in Node.js, and how easy it is to build a website with it.
https://2023.ploneconf.org/schedule/how-to-build-a-site-using-nick
David Glick : Tales from a production Plone Cluster
Following Rob, kitconcept employee David Glick shared details and stories about hosting large Plone sites in a Docker Swarm cluster.
https://2023.ploneconf.org/schedule/tales-from-a-production-plone-cluster
Érico Andrei : Unlocking the Power of plone.distribution : A Hands-On Guide
In this talk, Érico Andrei guided us through the feature-rich world of Plone Distributions.
https://2023.ploneconf.org/schedule/unlocking-the-power-of-plone-distribution-a-hands-on-guide
Local sport showcase and party
In the evening, CodeSyntax organized a showcase of different local sports, including stone lifting, wood chopping, and wood sawing. Timo represented kitconcept in this together with Phillip Bauer of Starzel. After that, we concluded the day with cold drinks and pintxos at the conference party.
Day 3
Day 3 was filled with quite technical presentations, providing information on the cutting-edge technology Plone has to offer.
Fred van Dijk : How the Plone Foundation Ai.team manages its websites with CI/CD
On the third day of the Plone Conference, kitconcept employee Fred van Dijk shared news about automating a Plone release and how to host and operate a small Docker Swarm cluster running Plone.
https://2023.ploneconf.org/schedule/how-the-plone-foundation-ai-team-manages-its-websites-with-ci-cd
Víctor Fernández de Alba : volto-light-theme: Volto Theming, Reimagined
After a quick coffee break, Víctor Fernández de Alba shared the progress on volto-light-theme and its inner workings.
https://2023.ploneconf.org/schedule/volto-light-theme-volto-theming-in-2023
Timo Stollenwerk : How we built the Website for the German Aerospace Center (DLR) in less than six months
CEO Timo Stollenwerk shared the story of the challenges of migrating large, government-owned websites into a Plone project.
Érico Andrei : Testing your Plone codebase with Pytest
A little later in the afternoon, Érico Andrei presented an improved way to test Plone codebases.
https://2023.ploneconf.org/schedule/testing-your-plone-codebase-with-pytest
Rohit Kumar : Workflow Manager with Volto
In his presentation, Rohit Kumar shared the progress on implementing a visual workflow manager in Volto.
https://2023.ploneconf.org/schedule/workflow-manager-in-volto
Summary
The kitconcept team continues to drive innovation in the Plone community. Volto is the default frontend for Plone 6 and dominated the topics during the conference. We are happy to be part of such an amazing community.
18 Oct 2023 3:00pm GMT