06 Nov 2025

Planet Python

Django Weblog: 2026 DSF Board Candidates

Thank you to the 19 individuals who have chosen to stand for election. This page contains their candidate statements submitted as part of the 2026 DSF Board Nominations.

Our deepest gratitude goes to our departing board members who are at the end of their term and chose not to stand for re-election: Sarah Abderemane and Thibaud Colas; thank you for your contributions and commitment to the Django community ❤️.

Those eligible to vote in this election will receive information on how to vote shortly. Please check for an email with the subject line "2026 DSF Board Voting". Voting will be open until 23:59 on November 26, 2025 Anywhere on Earth.

Any questions? Reach out on our dedicated forum thread or via email to foundation@djangoproject.com.

All candidate statements

To make it simpler to review all statements, here they are as a list of links. Voters: please take a moment to read all statements before voting! It will take some effort to rank all candidates on the ballot. We believe in you.

  1. Aayush Gauba (he/him) - St. Louis, MO
  2. Adam Hill (he/him) - Alexandria, VA
  3. Andy Woods (he/they) - UK
  4. Apoorv Garg (he/him) - India, now living in Japan
  5. Ariane Djeupang (she/her) - Cameroon
  6. Arunava Samaddar (he/him) - India
  7. Chris Achinga (he/him) - Mombasa, Kenya
  8. Dinis Vasco Chilundo (he/him) - Cidade de Inhambane, Mozambique
  9. Jacob Kaplan-Moss (he/him) - Oregon, USA
  10. Julius Nana Acheampong Boakye (he/him) - Ghana
  11. Kyagulanyi Allan (he/him) - Kampala, Uganda
  12. Nicole Buque (she) - Maputo, Mozambique
  13. Nkonga Morel (he/him) - Cameroun
  14. Ntui Raoul Ntui Njock (he/his) - Buea, Cameroon
  15. Priya Pahwa (she/her) - India, Asia
  16. Quinter Apondi Ochieng (she) - Kenya-Kisumu City
  17. Rahul Lakhanpal (he/him) - Gurugram, India
  18. Ryan Cheley (he/him) - California, United States
  19. Sanyam Khurana (he/him) - Toronto, Canada

Aayush Gauba (he/him) St. Louis, MO

View personal statement

I'm Aayush Gauba, a Django developer and Djangonaut Space mentee passionate about open source security and AI integration in web systems. I've spoken at DjangoCon US and actively contribute to the Django community through projects like AIWAF. My focus is on using technology to build safer and more inclusive ecosystems for developers worldwide.

Over the past few years, I've contributed to multiple areas of technology ranging from web development and AI security to research in quantum-inspired computing. I've presented talks across these domains, including at DjangoCon US, where I spoke about AI-powered web security and community-driven innovation. Beyond Django, I've published academic papers exploring the intersection of ethics, quantum AI, and neural architecture design, presented at IEEE and other research venues. These experiences have helped me understand both the technical and philosophical challenges of building responsible and transparent technology. As a Djangonaut Space mentee, I've been on the learning side of Django's mentorship process and have seen firsthand how inclusive guidance and collaboration can empower new contributors. I bring a perspective that connects deep research with community growth, balancing innovation with the values that make Django strong: openness, ethics, and accessibility.

As part of the DSF board, I would like to bridge the gap between experienced contributors and new voices. I believe mentorship and accessibility are key to Django's future. I would also like to encourage discussions around responsible AI integration, web security, and community growth, ensuring Django continues to lead both technically and ethically. My goal is to help the DSF stay forward-looking while staying true to its open, supportive roots.

Adam Hill (he/him) Alexandria, VA

View personal statement

I have been a software engineer for over 20 years and have been deploying Django in production for over 10. When not writing code, I'm probably playing pinball, watching a movie, or shouting into the void on social media.

I have been working with Django in production for over 10 years at The Motley Fool where I am a Staff Engineer. I have also participated in the Djangonauts program for my Django Unicorn library, gave a talk at DjangoCon EU (virtual) and multiple lightning talks at DjangoCon US conferences, built multiple libraries for Django and Python, have a semi-regularly updated podcast about Django with my friend, Sangeeta, and just generally try to push the Django ecosystem forward in positive ways.

The key issue I would like to get involved with is updating the djangoproject.com website. The homepage itself hasn't changed substantially in over 10 years and I think Django could benefit from a fresh approach to selling itself to developers who are interested in a robust, stable web framework. I created a forum post around this here: Want to work on a homepage site redesign?. I also have a document where I lay out some detailed ideas about the homepage here: Django Homepage Redesign.

Andy Woods (he/they) UK

View personal statement

I am based in academia as a senior Creative Technologist and Psychologist. I have a PhD in Multisensory Perception. I make web apps and love combining new technologies. I've worked in academia (Sheffield, Dublin, Bangor, Manchester, Royal Holloway), industry (Unilever, NL), and founded three technology-based startups. I am proud of my neurodiversity.

I was on the review team of DjangoCon Europe 2021. I have had several blog posts included in the Django Newsletter (e.g. django htmx modal popup loveliness). I have written a scientific article on using Django for academic research (under peer review). I have several projects mentioned on Django Packages, e.g. MrBenn Toolbar Plugin. I am part of a cohort of people who regularly meet to discuss Python-based software they are developing in the context of startups, started by Michael Kennedy. Here is an example of an open-source Django-based project I am developing there: Tootology.

I am keen on strengthening the link between Django and the academic community. Django has enormous potential as a research and teaching tool, but we academics don't know about this! I would like to change that by advocating for members of our community to appear on academic podcasts and social media platforms to promote Django's versatility and to reach new audiences.

In my professional life, I lead work on Equality, Diversity, and Inclusion, and am committed to creating fair and supportive environments. I will bring this to the DSF. The Django community already takes great strides in this area, and I would like to build upon this progress. Python recently turned down a $1.5 million grant, which I feel exemplifies the awesomeness of the greater community we are a part of.

Apoorv Garg (he/him) India, now living in Japan

View personal statement

I'm Apoorv Garg, a Django Software Foundation Member and open source advocate. I actively organize and volunteer for community events around Django, Grafana, and Postgres. Professionally, I work as a software engineer at a startup, focusing on building scalable backend systems and developer tools. I'm also part of the Google Summer of Code working groups with Django and JdeRobot, contributing to mentorship and open source development for over four years.

I have been actively speaking at various tech communities including Python, FOSSASIA, Django, Grafana, and Postgres. Over time, I've gradually shifted from just speaking to also organizing and volunteering at these community events, helping others get involved and build connections around open source technologies.

Beyond work, I've been mentoring students through Google Summer of Code with Django and JdeRobot. I also teach high school students the fundamentals of Python, Django, and robotics, helping them build curiosity and confidence in programming from an early stage.

Last year, I joined the Accessibility Working Group of the World Wide Web Consortium (W3C), which focuses on improving web accessibility standards and ensuring inclusive digital experiences for all users. My goal is to bring these learnings into the Django ecosystem, aligning its community and tools with global accessibility best practices.

Looking at the issues, I believe the opportunity of Google Summer of Code is currently very limited in Django. I know Django already has a lot of contributions, but being part of the core members in the JdeRobot organization, which is a small open source group, I understand the pain points we face when trying to reach that level of contribution. The way we utilize GSoC in JdeRobot has helped us grow, improve productivity, and bring in long-term contributors. I believe Django can benefit from adopting a similar approach.

Funding is another major issue faced by almost every open source organization. There are continuous needs around managing grants for conferences, supporting local communities and fellows, and sponsoring initiatives that strengthen the ecosystem. Finding sustainable ways to handle these challenges is something I want to focus on.

I also plan to promote Django across different open source programs. In my opinion, Django should not be limited to Python or Django-focused events. It can and should have a presence in database and infrastructure communities such as Postgres, Grafana, FOSSASIA, and W3C conferences around the world. This can help connect Django with new audiences and create more collaboration opportunities.

Ariane Djeupang (she/her) Cameroon

View personal statement

I'm Ariane Djeupang from Cameroon (Central Africa), an ML Engineer, Project Manager, and Community Organizer passionate about building sustainable, inclusive tech ecosystems across Africa. As a Microsoft MVP in the Developer Technologies category, an active DSF member, and a leader in open source communities, I believe in the power of collaboration, documentation, and mentorship to unlock global impact.

My efforts focus on lowering the barriers to meaningful participation. My work sits at the intersection of production engineering, clear technical communication, and community building. I've spent years building ML production-ready systems with Django, FastAPI, Docker, cloud platforms, and also ensuring that the knowledge behind those systems is accessible to others. I've turned complex workflows into approachable, accessible guides and workshops that empower others to build confidently. I've also collaborated with global networks to promote ethical ML/AI and sustainable tech infrastructure in resource-constrained environments.

Through my extensive experience organizing major events like DjangoCon Africa, UbuCon Africa, PyCon Africa, DjangoCon US, and EuroPython, I've created inclusive spaces where underrepresented voices lead, thrive, and are celebrated. This has equipped me with the skills and insights needed to drive inclusivity, sustainability, and community engagement. I volunteer on both the DSF's CoC and D&I (as Chair) working groups. I also contribute to the scientific community through projects like NumPy, Pandas, SciPy, and the DISCOVER COOKBOOK (under NumFOCUS' DISC Program).

I am the very first Cameroonian woman to be awarded Microsoft MVP, a recognition that reflects years of consistent contribution, technical excellence, and community impact. The program connects me with a global network that I actively leverage to bring visibility, resources, and opportunities back to the Django and Python communities, bridging local initiatives with global platforms to amplify Django's reach and relevance. It demonstrates that my work is recognized at the highest levels of the industry.

As a young Black African woman in STEM from a region of Africa with pretty limited resources and an active DSF member, I've dedicated my career to fostering inclusivity and representation in the tech and scientific spaces and I am confident that I bring a unique perspective to the table.

I will push the DSF to be more than a steward of code, to be a catalyst for global belonging. My priorities are:

  • Radical inclusion: I'll work to expand resources and support for contributors from underrepresented regions, especially in Africa, Latin America, and Southeast Asia. This includes funding for local events, mentorship pipelines, and multilingual documentation sprints.
  • Sustainable community infrastructure: I'll advocate for sustainable models of community leadership, ones that recognize invisible labor, prevent burnout, and promote distributed governance. We need to rethink how we support organizers, maintainers, and contributors beyond code.
  • Ethical tech advocacy: I'll help the DSF navigate the ethical dimensions of Django's growing role in AI and data-driven systems. From privacy to fairness, Django can lead by example. And I'll work to ensure our framework reflects our values.
  • Global partnerships: I want to strengthen partnerships with regional communities and allied open-source foundations, ensuring Django's growth is global and socially conscious.

I will bring diversity, a young and energized spirit that I think most senior boards lack. My vision is for the DSF to not only maintain Django but to set the standard for inclusive, ethical, and sustainable open source. My goal is simple: to make Django the most welcoming, resilient, and socially conscious web framework in the world.

Arunava Samaddar (he/him) India

View personal statement

15 years of experience in Information Technology.

Microsoft technologies, Python, MongoDB, cloud technologies, testing, people management and supervision, and L2 production support and maintenance.

Well experienced in software sales, product delivery, operations, Agile/Scrum, and marketing.

Chris Achinga (he/him) Mombasa, Kenya

View personal statement

I am a software developer, primarily using Python and Javascript, building web and mobile applications. At my workplace, I lead the Tech Department and the Data team.

I love developer communities and have supported emerging developers through meetups, training, and community events, including PyCon Kenya, local Django meetups, and university outreach.

At Swahilipot Hub, I built internal tools, supported digital programs, and mentored over 300 young developers through industrial attachment programs. I primarily use Django and React to develop internal tools and websites for Swahilipot Hub, including our radio station site (Swahilipot FM).

I also work with Green World Campaign Kenya on the AIRS platform, where we use AI, cloud technology, and blockchain to support environmental projects and rural communities.

Outside of engineering, I write technical content and actively organise and support developer communities along the Kenyan coast to help more young people grow into tech careers - Chris Achinga's Articles and Written Stuff

I want to get more involved on the community side: diversity in terms of regional representation and awareness of Django and the Django Software Foundation. As much as there are already a lot of efforts in place, there is no African entity of the DSF, which makes it difficult for companies and organizations in Africa to donate to and support the DSF. I would love to champion and pioneer work in that direction, not only for Africa but also for other under-represented geographical areas.

I wasn't so sure about this last year, but I am more confident now, with a better understanding of the Django ecosystem, and I know I have the capability to bring more contributions to Django, both financially and code-wise. I would also love to make sure that Django and its ecosystem are well known through proper communication channels. I know this differs from country to country; the goal is to make sure the DSF is present wherever we are needed, and to create the feeling that Django is for everyone, everywhere!

Dinis Vasco Chilundo (he/him) Cidade de Inhambane, Mozambique

View personal statement

I am a Rural Engineer from Universidade Eduardo Mondlane with practical experience in technology, data management, telecommunications, and sustainability.

In recent years, I have worked as a trainer and coach, as well as a researcher, empowering young people with programming, digital skills, and data analysis. I have also contributed to open-source projects, promoting access to technology and remote learning in several cities across Mozambique. These experiences have strengthened my belief in the power of open-source communities to create opportunities, foster collaboration, and drive innovation in regions with limited resources.

The thing I want the DSF to do is to expand its support for students and early-career professionals. Personally, what I want to achieve is collaboration and transparency in our actions, as integrity is non-negotiable.

Jacob Kaplan-Moss (he/him) Oregon, USA

View personal statement

I was one of the original maintainers of Django, and was the original founder and first President of the DSF. I re-joined the DSF board in 2023, and have served as Treasurer since 2024. I used to be a software engineer and security consultant (REVSYS, Latacora, 18F, Heroku), before mostly retiring from tech in 2025 to become an EMT.

I've been a member of the DSF Board for 3 years, so I bring some institutional knowledge there. I've been involved in the broader Django community as long as there has been a Django community, though the level of my involvement has waxed and waned. The accomplishments I'm the most proud of in the Django community are creating our Code of Conduct (djangoproject.com/conduct/), and more recently establishing the DSF's Working Groups model (django/dsf-working-groups).

Outside of the Django community, I have about 15 years of management experience, at companies small and large (and also in the US federal government).

I'm running for re-election with three goals for the DSF: (a) hire an Executive Director, (b) build more "onramps" into the DSF and Django community, and (c) expand and update our Grants program.

Hire an ED: this is my main goal for 2026, and the major reason I'm running for re-election. The DSF has grown past the point where being entirely volunteer-run works; we need to transition the organization towards a more professional non-profit operation, which means paid staff. Members of the Board worked on this all throughout 2025, mostly behind the scenes, and we're closer than ever -- but not quite there. We need to make this happen in 2026.

Build ""onramps"": this was my main goal when I ran in 2024 (see my statement at 2024 DSF Board Candidates). We've had some success there: several Working Groups are up and running, and over on the technical side we helped the Steering Council navigate a tricky transition, and they're now headed in a more positive direction. I'm happy with our success there, but there's still work to do; helping more people get involved with the DSF and Django would continue to be a high-level goal of mine. And, I'd like to build better systems for recognition of people who contribute to the DSF/Django - there are some incredible people working behind the scenes that most of the community has heard of.

Expand and update our grants program: our grants program is heavily overdue for a refresh. I'd like to update our rules and policies, make funding decisions clearer and less ad-hoc, increase the amount of money we're giving per grant, and (funding allowing) expand to other kinds of grants (e.g. travel grants, feature grants, and more). I'd also like to explore turning over grant decisions to a Working Group (or subcommittee of the board), to free up Board time for more strategic work.

Julius Nana Acheampong Boakye (he/him) Ghana

View personal statement

I'm a proud Individual Member of the Django Software Foundation and a full-stack software engineer with a strong focus on Django and mobile development. Beyond code, I'm deeply involved in the global Python, Django, Google, and FlutterFlow communities, actively contributing to the organization of several major conferences around the world.

I am a passionate full-stack software engineer with a strong focus on Django and mobile development. Over the years, I've contributed to the global Python and Django communities through volunteering, organizing, and speaking. I served as the Opportunity Grant Co-Chair for DjangoCon US (2024 & 2025), where I helped ensure accessibility and inclusion for underrepresented groups. I also helped organise DjangoCon Europe, where my impact was felt (see LinkedIn post).

I was also the Design Lead for PyCon Africa 2024 and PyCon Ghana 2025, where I worked on all the designs to make the conference feel like home (see LinkedIn post), and I helped organise other regional events, including DjangoCon Africa, PyCon Namibia, and PyCon Portugal. Beyond organising, I've spoken at several local and international conferences, sharing knowledge and promoting community growth, including PyCon Africa, DjangoCon Africa, PyCon Nigeria, and PyCon Togo.

I'm also an Individual Member of the Django Software Foundation, and my work continues to center on empowering developers, building open communities, and improving access for newcomers in tech.

As a board member, I want to help strengthen Django's global community by improving accessibility, diversity, and engagement especially across regions where Django adoption is growing but still lacks strong community infrastructure, such as Africa and other underrepresented areas.

My experience as Opportunity Grant Co-Chair for DjangoCon US and Design Lead for PyCon Africa has shown me how powerful community-driven support can be when it's backed by inclusion and transparency. I want the DSF to continue building bridges between developers, organizers, and contributors making sure that everyone, regardless of location or background, feels seen and supported.

I believe the DSF can take a more active role in empowering local communities, improving mentorship pathways, and creating better visibility for contributors who work behind the scenes. I also want to support initiatives that make Django more approachable to new developers through clearer learning materials and global outreach programs.

Personally, I want to help the DSF improve communication with international communities, expand partnerships with educational programs and tech organizations, and ensure the next generation of developers see Django as not just a framework, but a welcoming and sustainable ecosystem.

My direction for leadership is guided by collaboration, empathy, and practical action, building on Django's strong foundation while helping it evolve for the future.

Kyagulanyi Allan (he/him) Kampala, Uganda

View personal statement

I am Kyagulanyi Allan, a software developer and co-founder at Grin Mates. Grin Mates is an eco-friendly EVM dApp with an inbuilt crypto wallet that awards Green points for verified sustainable activities. I am very excited about the potential of web3 and saddened by some other parts of it.

I am a developer, and I have been lucky to volunteer and also contribute. I worked on diverse projects like AROC and Grin Mates. I volunteered as a Google student developer lead at my university, when I was working at after query experts on project pluto. I used Python to train the LLM model on bash/Linux commands.

My position on key issues is on advancing and advocating for inclusiveness, with priority on children from rural areas.

Nicole Buque (she) Maputo, Mozambique

View personal statement

My name is Nicole Buque, a 20-year-old finalist student in Computer Engineering from Mozambique. I am deeply passionate about data analysis, especially in the context of database systems, and I love transforming information into meaningful insights that drive innovation.

During my academic journey, I have worked with Vodacom, contributing to website development projects that improved digital communication and accessibility. I also participated in the WT Bootcamp for Data Analysis, where I gained strong analytical, technical, and teamwork skills. As an aspiring IT professional, I enjoy exploring how data, systems, and community collaboration can create sustainable solutions. My experience has helped me develop both technical expertise and a people-centered approach to technology - understanding that real progress comes from empowering others through knowledge.

Nkonga Morel (he/him) Cameroun

View personal statement

Curious, explorer, calm, patient

My experience with Django is intermediate.

My direction for the DSF is one of growth, mentorship, and openness, ensuring Django remains a leading framework not just technically, but socially.

Ntui Raoul Ntui Njock (he/his) Buea, Cameroon

View personal statement

I'm a software engineer passionate about AI/ML and solving problems in the healthcare sector in collaboration with others.

I'm a skilled software engineer in the domains of AI/ML, Django, Reactjs, and TailwindCSS. I have been building software for over 2 years now, and growing in this space has had some impact on the community: I have been organizing workshops at the University of Buea, teaching people about the Django framework, and I also had the privilege of participating at the Deep Learning Indaba Cameroon, where I was interviewed by CRTV to share knowledge about deep learning. You can see all of this on my LinkedIn profile (Ntui Raoul).

I believe that, in collaboration with others at the DSF, I'll help the DSF improve the ways it accomplishes its goals. I believe we shall improve the codebase of the Django framework and its interoperability with other frameworks, so as to make the framework easier to use. I'll also help to expand the Django framework's reach to people across the world.

Priya Pahwa (she/her) India, Asia

View personal statement

I'm Priya Pahwa (she/her), an Indian woman who found both community and confidence through Django. I work as a Software Engineer (Backend and DevOps) at a fintech startup and love volunteering in community spaces. From leading student communities as a GitHub Campus Expert to contributing as a GitHub Octern and supporting initiatives in the Django ecosystem, open-source is an integral part of my journey as a developer.

My belonging to the Django community has been shaped by serving as the Session Organizer of the global Djangonaut Space program, where I work closely with contributors and mentors from diverse geographies, cultures, age groups, and both coding and non-coding backgrounds. As part of the organizing team for Sessions 3, 4, and the ongoing Session 5, I have found that each experience has evolved my approach towards more intentional community stewardship and collaboration.

I have also served as Co-Chair of the DSF Fundraising Working Group since its formation in mid-2024. As we enter the execution phase, we are focused on establishing additional long-term funding streams for the DSF. I intend to continue this work by:

  • Running sustained fundraising campaigns rather than one-off appeals
  • Building corporate sponsorship relationships for major donations
  • Focusing on the funding of the Executive Director for financial resilience

My commitment to a supportive ecosystem guides my work. I am a strong advocate of psychological safety in open-source, a topic I've publicly talked about ("Culture Eats Strategy for Breakfast" at PyCon Greece and DjangoCongress Japan). This belief led me to join the DSF Code of Conduct Working Group because the health of a community is determined not only by who joins, but by who feels able to stay.

If elected to the board, I will focus on:

  • Moving fundraising WG from "effort" to infrastructure (already moving in the direction by forming the DSF prospectus)
  • Initiating conference travel grants to lower barriers and increase participation for active community members
  • Strengthening cross-functional working groups' collaboration to reduce organizational silos
  • Designing inclusive contributor lifecycles to support pauses for caregiving or career breaks
  • Highlighting diverse user stories and clearer "here's how to get involved" community pathways
  • Amplifying DSF's public presence and impact through digital marketing strategies

Quinter Apondi Ochieng (she) Kenya-Kisumu City

View personal statement

My name is Quinter Apondi Ochieng and I am a web developer from Kisumu City. Django has been part of my professional development journey for the past two years. I have contributed to local meetups as a community leader, developed several websites (one being an e-commerce website), and also organized a Django Girls Kisumu workshop, which didn't succeed due to financial constraints; the workshop was to take place on 1st November but was postponed.

In my current position, I lead a small team building Django-based applications. I have also volunteered as a Python Kisumu community committee member, serving on a non-profit tech board driven by passion. The experience has strengthened my skills in collaboration, decision making, long-term project planning, and governance. I understand how important it is for the DSF to balance technical progress with sustainability and transparency.

The challenges I can help address are limited mentorship and unemployment. It has always blown my mind why IT, computer science, and SWE graduates struggle after campus life. In my country, SWE, IT, and computer science courses require final-year projects that students pass but that never go beyond the educational institution. I believe that if those projects were shipped, unemployment would be cut by over 50%.

Rahul Lakhanpal (he/him) Gurugram, India

View personal statement

I am a software architect based out of Gurugram, India, with over 13 years in the field of software development. For the past 8 years, I have been working 100% remotely as an independent contractor under my own company, deskmonte.

As a kid I was always the one breaking more toys than I played with and was super curious. Coming from a normal family background, we always had a focus on academics. Although I did not break into the top tier colleges, the intent and curiosity to learn more stayed.

As of now, I am happily married with a year-old kid.

My skills are primarily Python and Django; I have been using the same tech stack for the last decade. I have used it to create beautiful admin interfaces for my clients, and have written APIs in both REST, using the django rest framework package, and GraphQL, using django-graphene. Alongside these, I have almost always integrated Postgres and Celery+Redis with my core tech stack.

In terms of volunteering, I have been an active code mentor at Code Institute, Ireland and have been with them since 2019, helping students pick up code using Python and Django for the most part.

I love the django rest framework and I truly believe that the admin interface is extremely powerful and the utility of the overall offering is huge.

I would love to take Django to people who are just starting out, support and promote more meetups/conferences that focus on Django, and advance Django's utility in the age of AI.

Ryan Cheley (he/him) California, United States

View personal statement

I'm Ryan and I'm running for the DSF Board in the hopes of being the Treasurer. I've been using Django since 2018. After several years of use, I finally had a chance to attend DjangoCon US in 2022. I felt like I finally found a community where I belonged and knew that I wanted to do whatever I could to give back.

My involvement with the community over the last several years includes being a:

If elected to the board, I would bring valuable skills to benefit the community, including:

  • Managing technical teams for nearly 15 years
  • Nearly 20 years of project management experience
  • Overseeing the financial operations for a team of nearly 30
  • Consensus-building on large projects

I'm particularly drawn to the treasurer role because my background in financial management and budgeting positions me to help ensure the DSF's continued financial health and transparency.

For more details on my implementation plan, see my blog post Details on My Candidate Statement for the DSF.

If elected to the DSF Board I have a few key initiatives I'd like to work on:

  1. Getting an Executive Director to help run the day-to-day operations of the DSF
  2. Identifying small to midsized companies for sponsorships
  3. Implementing a formal strategic planning process
  4. Setting up a fiscal sponsorship program to allow support of initiatives like Django Commons

I believe these are achievable in the next 2 years.

Sanyam Khurana (he/him) Toronto, Canada

View personal statement

I'm Sanyam Khurana ("CuriousLearner"), a seasoned Django contributor and member of the djangoproject.com Website Working Group, as well as a CPython bug triager and OSS maintainer. I've worked in India, the U.K., and Canada, and I'm focused on inclusion, dependable tooling, and turning first-time contributors into regulars.

I've contributed to Django and the wider Python ecosystem for years as a maintainer, reviewer, and issue triager. My Django-focused work includes django-phone-verify (auth flows), django-postgres-anonymizer (privacy/data handling), and Django-Keel (a production-ready project template). I also build developer tooling like CacheSniper (a tiny Rust CLI to sanity-check edge caching).

Repos: django-phone-verify , django-postgres-anonymizer , django-keel , cache_sniper

CPython & Django contributions: django commits, djangoproject.com commits, CPython commits

Beyond code, I've supported newcomers through docs-first guidance, small PR reviews, and patient issue triage. I'm a CPython bug triager and listed in Mozilla credits, which taught me to balance openness with careful review and clear process. I've collaborated across India, UK, and Canada, so I'm used to async work, time-zones, and transparent communication.

I owe my learnings to the community and want to give back. I understand that the DSF Board is non-technical leadership (fundraising, grants/sponsorships, community programs, CoC support, and stewardship of Django's operations), not deciding framework features. That's exactly where I want to contribute.

I'll push for an easy, skimmable annual "Where your donation went" report (fellows, events, grants, infra) plus lightweight quarterly updates. Clear storytelling helps retain individual and corporate sponsors and shows impact beyond core commits.

I want to grow contributors globally by turning their first PR into regular contributions. I want to make this path smoother by funding micro-grants for mentorship/sprints and backing working groups with small, delegated budgets under clear guardrails - so they can move fast without waiting on the Board.

I propose a ready-to-use "starter kit" for meetups/sprints: budget templates, venue ask letters, CoC, diversity travel-grant boilerplates, and a sponsor prospectus. We should prioritize regions with high Django usage but fewer historic DSF touchpoints (South Asia, Africa, LATAM). This comes directly from organizing over 120 meetups and an annual conference, PyCon India, for 3 years.

Your move now

That's it, you've read it all 🌈! Be sure to vote if you're eligible, by using the link shared over email. To support the future of Django, donate to the Django Software Foundation on our website or via GitHub Sponsors. We also have our 30% Off PyCharm Pro - 100% for Django 💚.

06 Nov 2025 5:00am GMT

Django community aggregator: Community blog posts

Hitting Limits and Noticing Clues in Graphs

Sometimes the limit you hit when dealing with high traffic on a website isn't the limit that needs to be raised. We encountered this recently on a site we're helping to maintain and upgrade. The site has been around since the very early days of Django. It was built back in the days when Apache with mod_wsgi (or even mod_python!) was one of the more common Django deployment environments.

06 Nov 2025 4:53am GMT

Cursor vs. Claude for Django Development

This article looks at how Cursor and Claude compare when developing a Django application.

06 Nov 2025 4:28am GMT

Planet Python

Seth Michael Larson: Ice Pikmin and difficulty of Pikmin Bloom event decor sets

I play Pikmin Bloom regularly with a group of friends. The game can be best described as "Pokémon Go, but walking". One of the main goals of the game is to collect "decor Pikmin" which can come from the environment, landmarks, and businesses that you walk by. Recently there's been a change to the game that makes completing sets of decor Pikmin significantly more difficult; this post explores the new difficulty increase.

Every month there are special decor Pikmin which are earned by completing challenges like walking, growing Pikmin, or planting flowers. The type of Pikmin you receive is randomized between the available types, and you can only complete the set by collecting one of each available Pikmin type. You can only receive these special decors during the specific month, after the month is over you have to wait a calendar year before you can continue collecting seedlings.

Just a few days ago there were 7 Pikmin types corresponding to the first three mainline Pikmin games: Red, Yellow, Blue, White, Purple, Rock, and Wing Pikmin. With the most recent update another Pikmin type has been added: Ice Pikmin from Pikmin 4. This means that going forward there will be 8 Pikmin types per event decor set instead of only 7.

So, what does that mean for the difficulty of the game? I could probably do some math here, but it'd be much easier to simulate how many event Pikmin you'd need to receive before completing the set depending on if there are 7 or 8 total Pikmin types.

Running this Python simulation script creates the below table which shows the difference in cumulative probability for completing a decor set after receiving a number of Pikmin seedlings. For example, if you have grown 10 seedlings you'd have a 10.5% chance of completing the decor set before Ice Pikmin and only a 2.8% chance of completing the decor set after Ice Pikmin.
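
(The script itself is linked from the original post. As a rough illustration of the same idea, here is a minimal coupon-collector style simulation, assuming each seedling's type is uniformly random; the function names and trial count below are illustrative, not taken from the original script.)

```python
import random

def seedlings_needed(num_types: int) -> int:
    """Draw uniformly random Pikmin types until one of each has been collected."""
    collected = set()
    draws = 0
    while len(collected) < num_types:
        collected.add(random.randrange(num_types))
        draws += 1
    return draws

def completion_probabilities(num_types: int, trials: int = 100_000, max_seedlings: int = 50) -> list[float]:
    """Cumulative probability of having completed the set after N seedlings (index = N)."""
    counts = [0] * (max_seedlings + 1)
    for _ in range(trials):
        needed = seedlings_needed(num_types)
        if needed <= max_seedlings:
            counts[needed] += 1
    cumulative, total = [], 0
    for count in counts:
        total += count
        cumulative.append(total / trials)
    return cumulative

before = completion_probabilities(7)  # Red, Yellow, Blue, White, Purple, Rock, Wing
after = completion_probabilities(8)   # ...plus Ice Pikmin
for n in range(7, 51):
    print(f"{n:2d}  {before[n]:6.1%}  {after[n]:6.1%}  {after[n] - before[n]:+7.1%}")
```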

# Seedlings Before After Diff
7 0.6% 0.0% -0.6%
8 2.4% 0.3% -2.2%
9 5.8% 1.1% -4.7%
10 10.5% 2.8% -7.7%
11 16.3% 5.6% -10.7%
12 22.8% 9.4% -13.5%
13 29.7% 13.9% -15.8%
14 36.6% 19.2% -17.4%
15 43.3% 24.9% -18.5%
16 49.7% 30.7% -19.0%
17 55.7% 36.5% -19.1%
18 61.1% 42.3% -18.8%
19 66.0% 47.9% -18.1%
20 70.4% 53.1% -17.3%
21 74.3% 57.9% -16.4%
22 77.7% 62.4% -15.3%
23 80.7% 66.5% -14.2%
24 83.3% 70.3% -13.1%
25 85.6% 73.6% -12.0%
26 87.6% 76.7% -10.9%
27 89.3% 79.4% -9.9%
28 90.8% 81.8% -9.0%
29 92.1% 84.0% -8.1%
30 93.2% 85.9% -7.3%
31 94.2% 87.6% -6.5%
32 95.0% 89.1% -5.9%
33 95.7% 90.4% -5.3%
34 96.3% 91.6% -4.7%
35 96.8% 92.6% -4.2%
36 97.3% 93.6% -3.7%
37 97.7% 94.4% -3.3%
38 98.0% 95.1% -2.9%
39 98.3% 95.7% -2.6%
40 98.5% 96.2% -2.3%
41 98.7% 96.7% -2.1%
42 98.9% 97.1% -1.8%
43 99.1% 97.5% -1.6%
44 99.2% 97.8% -1.4%
45 99.3% 98.0% -1.3%
46 99.4% 98.3% -1.1%
47 99.5% 98.5% -1.0%
48 99.6% 98.7% -0.9%
49 99.6% 98.9% -0.8%
50 99.7% 99.0% -0.7%

For mid-range numbers of Pikmin seedlings (13-22) you'll be at least 15% less likely to have completed the Pikmin decor set for that number of seedlings. To have a 95% chance of completing a decor set you'd need to gather 32 seedlings prior to Ice Pikmin, with Ice Pikmin you'd need to collect 38 seedlings.

I don't know how many event Pikmin seedlings I receive in a typical month, so I'll be watching that number to see if I'm able to complete the set. Good luck out there, Pikmin players! 😬



Thanks for keeping RSS alive! ♥

06 Nov 2025 12:00am GMT

05 Nov 2025

Planet Python

TestDriven.io: Cursor vs. Claude for Django Development

This article looks at how Cursor and Claude compare when developing a Django application.

05 Nov 2025 10:28pm GMT

Django community aggregator: Community blog posts

Thoughts about Django-based content management systems

I have almost exclusively used Django for implementing content management systems (and other backends) since 2008.

In this time, content management systems have come and gone. The big three systems many years back were django CMS, Mezzanine and our own FeinCMS.

During all this time I have always kept an eye open for other CMS than our own but have steadily continued working in my small corner of the Django space. I think it's time to write down why I have been doing this all this time, for myself and possibly also for other interested parties.

Why not use Wagtail, django CMS or any of those alternatives?

Let's start with the big one. Why not use Wagtail?

The Django administration interface is actually great. Even though some people say that it should be treated as a tool for developers only, recent improvements to the accessibility and the general usability suggest otherwise. I have written more about my views on this in The Django admin is a CMS. Using and building on top of the Django admin is a great way to immediately profit from all current and future improvements without having to reimplement anything.

I don't want to have to reimplement Django's features, I want to add what I need on top.

Faster updates

Everyone implementing and maintaining other CMS is doing a great job and I don't want to throw any shade. I still feel that it's important to point out that systems can make it hard to adopt new Django versions on release day:

These larger systems have many more (very talented) people working on them. I'm not saying I'm doing a better job. I'm only pointing out that I'm following a different philosophy where I'm conservative about running code in production and I'd rather have less features when the price is a lot of maintenance later. I'm always thinking about long term maintenance. I really don't want to maintain some of these larger projects, or even parts of them. So I'd rather not adopt them for projects which hopefully will be developed and maintained for a long time to come. By the way: This experience has been earned the hard way.

The rule of least power

From Wikipedia:

In programming, the rule of least power is a design principle that "suggests choosing the least powerful [computer] language suitable for a given purpose". Stated alternatively, given a choice among computer languages, classes of which range from descriptive (or declarative) to procedural, the less procedural, more descriptive the language one chooses, the more one can do with the data stored in that language.

Django itself already provides lots and lots of power. I'd argue that a very powerful platform on top of Django may be too much of a good thing. I'd rather keep it simple and stupid.

Editing heterogenous collections of content

Django admin's inlines are great, but they are not sufficient for building a CMS. You need something to manage different types. django-content-editor does that and has done that since 2009.

When Wagtail introduced the StreamField in 2015 it was definitely a great update to an already great CMS but it wasn't a new idea generally and not a new thing in Django land. They didn't say it was and welcomed the fact that they also started using a better way to structure content.

Structured content is great. Putting everything into one large rich text area isn't what I want. Django's ORM and admin interface are great for actually modelling the data in a reusable way. And when you need more flexibility than what's offered by Django's forms, the community offers many projects extending the admin. These days, I really like working with the django-json-schema-editor component; I even reference other model instances in the database and let the JSON editor handle the referential integrity transparently for me (so that referenced model instances do not silently disappear).
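
For a rough sense of what that structured approach looks like in practice, here is a generic Django sketch (plain models in a hypothetical app's models.py; this is not the actual django-content-editor or django-json-schema-editor API, and all model names are made up):

```python
# Instead of one big rich text field, a page is composed of an ordered
# collection of typed content blocks, each with its own schema.
from django.db import models

class Page(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(unique=True)

class ContentBlock(models.Model):
    # Fields shared by all block types; each concrete type below can be
    # edited with its own admin inline or form.
    page = models.ForeignKey(Page, on_delete=models.CASCADE)
    ordering = models.PositiveIntegerField(default=0)

    class Meta:
        abstract = True
        ordering = ["ordering"]

class RichTextBlock(ContentBlock):
    text = models.TextField()

class ImageBlock(ContentBlock):
    image = models.ImageField(upload_to="images/")
    caption = models.CharField(max_length=200, blank=True)

class ExternalEmbedBlock(ContentBlock):
    url = models.URLField()
```

Rendering then iterates over a page's blocks in order, with a template per block type; that is roughly the shape of content that a plugin-based editor like django-content-editor manages in the admin.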

More reading

The future of FeinCMS and the feincms category may be interesting. Also, I'd love to talk about these thoughts, either by email or on Mastodon.

05 Nov 2025 6:00pm GMT

15 Oct 2025

Planet Plone - Where Developers And Integrators Write

Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.

There are several challenges when doing Plone migrations:

  • Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
  • Complex data structures. For example, a Folder with a Link as its default page, which pointed to some other content that had meanwhile been moved.
  • Migrating Classic UI to Volto
  • Also, you might be migrating from a completely different CMS to Plone.

How do we do migrations in Plone in general?

  • In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
  • Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.

Let's look at export/import, which has three parts:

  • Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
  • Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
  • Load: Transmogrifier, collective.exportimport, plone.exportimport.

Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.

collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.

Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.

Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.

collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.

15 Oct 2025 3:44pm GMT

Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, along with Docker, and, as a game changer, uv, which makes the installation of Python packages much faster.

With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to setup the server and deploy to it. Our sysadmins already had some other scripts. So we needed to integrate that.

First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.

Maik Derstappen showed me copier, yet another templating language. Our idea: create a cookieplone project, and then use copier to modify it.

What about the deployment? We are on GitLab. We host our runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This activates a pipeline to check-test-and-build. When it is merged, we bump the version using release-it.

Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.

For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.

Future improvements:

  • Start the docker containers and curl/wget the /ok endpoint.
  • lock files for the backend, with pip/uv.

15 Oct 2025 3:41pm GMT

Maurits van Rees: David Glick: State of plone.restapi

[Missed the first part.]

Vision: plone.restapi aims to provide a complete, stable, documented, extensible, language-agnostic API for the Plone CMS.

New services

  • @site: global site settings. These are overall, public settings that are needed on all pages and that don't change per context.
  • @login: choose between multiple login provider.
  • @navroot: contextual data from the navigation root of the current context (see the request sketch after this list).
  • @inherit: contextual data from any behavior. It looks for the closest parent that has this behavior defined, and gets this data.
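
As a rough illustration of how such context services are consumed, here is a minimal sketch using the requests library against a hypothetical local site (the URL, content path, and credentials below are placeholders, not from the talk):

```python
import requests

# Fetch navigation-root data for a page via the @navroot service.
# The Accept header asks Plone to answer through plone.restapi instead of HTML.
response = requests.get(
    "http://localhost:8080/Plone/news/some-page/@navroot",
    headers={"Accept": "application/json"},
    auth=("admin", "admin"),  # placeholder credentials for a local dev site
)
response.raise_for_status()
data = response.json()
print(data)  # contextual data from the navigation root of /news/some-page
```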

Dynamic teaser blocks: you can choose to customize the teaser content. So the teaser links to the item you have selected, but if you want, you can change the title and other fields.

Roadmap:

  • Don't break it.
  • 10.0 release for Plone 6.2: remove setuptools namespace.
  • Continue to support migration path from older versions: use an old plone.restapi version on an old Plone version to export it, and being able to import this to the latest versions.
  • Recycle bin (work in progress): a lot of the work from Rohan is in Classic UI, but he is working on the restapi as well.

Wishlist, no one is working on this, but would be good to have:

  • @permissions endpoint
  • @catalog endpoint
  • missing control panel
  • folder type constraints
  • Any time that you find yourself going to the Classic UI to do something, that is a sign something is missing.
  • Some changes to relative paths to fix some use cases
  • Machine readable specifications for OpenAPI, MCP
  • New forms backend
  • Bulk operations
  • Streaming API
  • External functional test suite, that you could also run against e.g. guillotina or Nick to see if it works there as well.
  • Time travel: be able to see the state of the database from some time ago. The ZODB has some options here.

15 Oct 2025 3:39pm GMT

15 Aug 2025

Planet Twisted

Glyph Lefkowitz: The Futzing Fraction

The most optimistic vision of generative AI is that it will relieve us of the tedious, repetitive elements of knowledge work so that we can get to work on the really interesting problems that such tedium stands in the way of. Even if you fully believe in this vision, it's hard to deny that today, some tedium is associated with the process of using generative AI itself.

Generative AI also isn't free, and so, as responsible consumers, we need to ask: is it worth it? What's the ROI of genAI, and how can we tell? In this post, I'd like to explore a logical framework for evaluating genAI expenditures, to determine if your organization is getting its money's worth.

Perpetually Proffering Permuted Prompts

I think most LLM users would agree with me that a typical workflow with an LLM rarely involves prompting it only one time and getting a perfectly useful answer that solves the whole problem.

Generative AI best practices, even from the most optimistic vendors, all suggest that you should continuously evaluate everything. ChatGPT, which is really the only genAI product with significantly scaled adoption, still says at the bottom of every interaction:

ChatGPT can make mistakes. Check important info.

If we have to "check important info" on every interaction, it stands to reason that even if we think it's useful, some of those checks will find an error. Again, if we think it's useful, presumably the next thing to do is to perturb our prompt somehow, and issue it again, in the hopes that the next invocation will, by dint of either:

  1. better luck this time with the stochastic aspect of the inference process,
  2. enhanced application of our skill to engineer a better prompt based on the deficiencies of the current inference, or
  3. better performance of the model by populating additional context in subsequent chained prompts.

Unfortunately, given the relative lack of reliable methods to re-generate the prompt and receive a better answer, checking the output and re-prompting the model can feel like just kinda futzing around with it. You try, you get a wrong answer, you try a few more times, eventually you get the right answer that you wanted in the first place. It's a somewhat unsatisfying process, but if you get the right answer eventually, it does feel like progress, and you didn't need to use up another human's time.

In fact, the hottest buzzword of the last hype cycle is "agentic". While I have my own feelings about this particular word, its current practical definition is "a generative AI system which automates the process of re-prompting itself, by having a deterministic program evaluate its outputs for correctness".

A better term for an "agentic" system would be a "self-futzing system".
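
In code terms, that working definition is roughly the loop below (a deliberately simplified sketch; the model call and correctness check are toy stand-ins, not any particular vendor's API):

```python
def agentic_loop(task, call_model, looks_correct, max_attempts=5):
    """Sketch of an 'agentic' system: re-prompt until a deterministic check passes."""
    prompt = task
    for attempt in range(1, max_attempts + 1):
        output = call_model(prompt)            # one inference
        ok, feedback = looks_correct(output)   # deterministic evaluation of the output
        if ok:
            return output
        # Automated futzing: fold the failure back into the next prompt.
        prompt = f"{task}\n\nAttempt {attempt} failed: {feedback}\nPlease fix it."
    raise RuntimeError("gave up after max_attempts attempts")

# Toy stand-ins, just so the loop runs; a real system would call an LLM here
# and evaluate with tests, linters, or schema checks.
def fake_model(prompt):
    return "42" if "fix" in prompt else "forty-two"

def is_number(output):
    return output.isdigit(), "expected digits"

print(agentic_loop("What is 6 * 7?", fake_model, is_number))
```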

However, the ability to automate some level of checking and re-prompting does not mean that you can fully delegate tasks to an agentic tool, either. It is, plainly put, not safe. If you leave the AI on its own, you will get terrible results that will at best make for a funny story and at worst might end up causing serious damage.

Taken together, this all means that for any consequential task that you want to accomplish with genAI, you need an expert human in the loop. The human must be capable of independently doing the job that the genAI system is being asked to accomplish.

When the genAI guesses correctly and produces usable output, some of the human's time will be saved. When the genAI guesses wrong and produces hallucinatory gibberish or even "correct" output that nevertheless fails to account for some unstated but necessary property such as security or scale, some of the human's time will be wasted evaluating it and re-trying it.

Income from Investment in Inference

Let's evaluate an abstract, hypothetical genAI system that can automate some work for our organization. To avoid implicating any specific vendor, let's call the system "Mallory".

Is Mallory worth the money? How can we know?

Logically, there are only two outcomes that might result from using Mallory to do our work.

  1. We prompt Mallory to do some work; we check its work, it is correct, and some time is saved.
  2. We prompt Mallory to do some work; we check its work, it fails, and we futz around with the result; this time is wasted.

As a logical framework, this makes sense, but ROI is an arithmetical concept, not a logical one. So let's translate this into some terms.

In order to evaluate Mallory, let's define the Futzing Fraction, "FF", in terms of the following variables:

  • H: the average amount of time a Human worker would take to do a task, unaided by Mallory
  • I: the amount of time that Mallory takes to run one Inference
  • C: the amount of time that a human has to spend Checking Mallory's output for each inference
  • P: the Probability that Mallory will produce a correct inference for each prompt
  • W: the average amount of time that it takes for a human to Write one prompt for Mallory
  • E: since we are normalizing everything to time, rather than money, we also have to account for the dollar cost of Mallory as a product, so we will include the Equivalent amount of human time we could purchase for the marginal cost of one inference

As in last week's example of simple ROI arithmetic, we will put our costs in the numerator, and our benefits in the denominator.

FF = (W + I + C + E) / (P × H)

The idea here is that for each prompt, the minimum amount of time-equivalent cost possible is W+I+C+E. The user must, at least once, write a prompt, wait for inference to run, then check the output; and, of course, pay any costs to Mallory's vendor.

If the probability of a correct answer is P = 1/3, then they will do this entire process 3 times on average, so we put P in the denominator. Finally, we divide everything by H, because we are trying to determine if we are actually saving any time or money, versus just letting our existing human, who has to be driving this process anyway, do the whole thing.

If the Futzing Fraction evaluates to a number greater than 1, as previously discussed, you are a bozo; you're spending more time futzing with Mallory than getting value out of it.
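
To make the arithmetic concrete, here is a tiny sketch that evaluates the fraction for some made-up numbers (every value below is hypothetical and expressed in minutes of human-time equivalent):

```python
def futzing_fraction(W, I, C, E, P, H):
    """FF = (W + I + C + E) / (P * H); a value above 1 means futzing costs more than it saves."""
    return (W + I + C + E) / (P * H)

# Hypothetical example: 10 min to write a prompt, 2 min of inference,
# 15 min of checking, 1 min worth of marginal inference cost, a 1/3 chance
# of a correct answer, and a task that would take the human 240 minutes unaided.
ff = futzing_fraction(W=10, I=2, C=15, E=1, P=1/3, H=240)
print(f"Futzing Fraction: {ff:.2f}")  # 0.35 -> in this made-up case, Mallory pays for itself
```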

Figuring out the Fraction is Frustrating

In order to even evaluate the value of the Futzing Fraction though, you have to have a sound method to even get a vague sense of all the terms.

If you are a business leader, a lot of this is relatively easy to measure. You vaguely know what H is, because you know what your payroll costs, and similarly, you can figure out E with some pretty trivial arithmetic based on Mallory's pricing table. There are endless YouTube channels, spec sheets and benchmarks to give you I. W is probably going to be so small compared to H that it hardly merits consideration11.

But, are you measuring C? If your employees are not checking the outputs of the AI, you're on a path to catastrophe that no ROI calculation can capture, so it had better be greater than zero.

Are you measuring P? How often does the AI get it right on the first try?

Challenges to Computing Checking Costs

In the fraction defined above, the term C is going to be large. Larger than you think.

Measuring P and C with a high degree of precision is probably going to be very hard; possibly unreasonably so, or too expensive12 to bother with in practice. So you will undoubtedly need to work with estimates and proxy metrics. But you have to be aware that this is a problem domain where your normal method of estimating is going to be extremely vulnerable to inherent cognitive bias, and find ways to measure.

Margins, Money, and Metacognition

First let's discuss cognitive and metacognitive bias.

My favorite cognitive bias is the availability heuristic and a close second is its cousin salience bias. Humans are empirically predisposed towards noticing and remembering things that are more striking, and to overestimate their frequency.

If you are estimating the variables above based on the vibe that you're getting from the experience of using an LLM, you may be overestimating its utility.

Consider a slot machine.

If you put a dollar into a slot machine, and you lose that dollar, this is an unremarkable event. Expected, even. It doesn't seem interesting. You can repeat this over and over again, a thousand times, and each time it will seem equally unremarkable. If you do it a thousand times, you will probably get gradually more anxious as your sense of your dwindling bank account becomes slowly more salient, but losing one more dollar still seems unremarkable.

If you put a dollar in a slot machine and it gives you a thousand dollars, that will probably seem pretty cool. Interesting. Memorable. You might tell a story about this happening, but you definitely wouldn't really remember any particular time you lost one dollar.

Luckily, when you arrive at a casino with slot machines, you probably know well enough to set a hard budget in the form of some amount of physical currency you will have available to you. The odds are against you, you'll probably lose it all, but any responsible gambler will have an immediate, physical representation of their balance in front of them, so when they have lost it all, they can see that their hands are empty, and can try to resist the "just one more pull" temptation, after hitting that limit.

Now, consider Mallory.

If you put ten minutes into writing a prompt, and Mallory gives a completely off-the-rails, useless answer, and you lose ten minutes, well, that's just what using a computer is like sometimes. Mallory malfunctioned, or hallucinated, but it does that sometimes, everybody knows that. You only wasted ten minutes. It's fine. Not a big deal. Let's try it a few more times. Just ten more minutes. It'll probably work this time.

If you put ten minutes into writing a prompt, and it completes a task that would have otherwise taken you 4 hours, that feels amazing. Like the computer is magic! An absolute endorphin rush.

Very memorable. When it happens, it feels like P=1.

But... did you have a time budget before you started? Did you have a specified N such that "I will give up on Mallory as soon as I have spent N minutes attempting to solve this problem with it"? When the jackpot finally pays out that 4 hours, did you notice that you put 6 hours worth of 10-minute prompt coins into it?

If you are attempting to use the same sort of heuristic intuition that probably works pretty well for other business leadership decisions, Mallory's slot-machine chat-prompt user interface is practically designed to subvert those sensibilities. Most business activities do not have nearly such an emotionally variable, intermittent reward schedule. They're not going to trick you with this sort of cognitive illusion.

Thus far we have been talking about cognitive bias, but there is a metacognitive bias at play too: while Dunning-Kruger, everybody's favorite metacognitive bias, does have some problems, the main underlying metacognitive bias is that we tend to believe our own thoughts and perceptions, and it requires active effort to distance ourselves from them, even if we know they might be wrong.

This means you must assume any intuitive estimate of C is going to be biased low; similarly P is going to be biased high. You will forget the time you spent checking, and you will underestimate the number of times you had to re-check.

To avoid this, you will need to decide on a Ulysses pact: some way of providing the inputs to a calculation for these factors that you will not be able to fudge if they seem wrong to you.
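
What such a pact looks like in practice will vary, but the spirit of it is: record the raw numbers as they happen, before memory can edit them. A purely hypothetical sketch (the log file and field names are mine, not any real tool's):

import csv
import time
from contextlib import contextmanager

LOG_PATH = "mallory_log.csv"  # hypothetical; append-only, reviewed later

@contextmanager
def futz_session(task_id):
    """Record one prompt/check cycle: how long it took, and whether the
    output was genuinely good enough that no further prompt was needed."""
    start = time.monotonic()
    record = {"task": task_id, "accepted": False, "minutes": 0.0}
    try:
        yield record
    finally:
        record["minutes"] = round((time.monotonic() - start) / 60, 2)
        with open(LOG_PATH, "a", newline="") as f:
            csv.DictWriter(f, fieldnames=record.keys()).writerow(record)

# Usage: wrap every attempt; flip `accepted` only when you would ship
# the result without issuing another prompt.
with futz_session("ticket-1234") as attempt:
    ...  # write the prompt, wait for inference, check the output
    attempt["accepted"] = True

Averaging the recorded minutes per accepted task, and dividing accepted tasks by total attempts, gives you values for C and P that you cannot retroactively fudge.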

Problematically Plausible Presentation

Another nasty little cognitive-bias landmine for you to watch out for is the authority bias, for two reasons:

  1. People will tend to see Mallory as an unbiased, external authority, and thereby see it as more of an authority than a similarly-situated human13.
  2. Being an LLM, Mallory will be overconfident in its answers14.

The nature of LLM training is also such that commonly co-occurring tokens in the training corpus produce higher likelihood of co-occurring in the output; they're just going to be closer together in the vector-space of the weights; that's, like, what training a model is, establishing those relationships.

If you've ever used an heuristic to informally evaluate someone's credibility by listening for industry-specific shibboleths or ways of describing a particular issue, that skill is now useless. Having ingested every industry's expert literature, commonly-occurring phrases will always be present in Mallory's output. Mallory will usually sound like an expert, but then make mistakes at random15.

While you might intuitively estimate C by thinking "well, if I asked a person, how could I check that they were correct, and how long would that take?" that estimate will be extremely optimistic, because the heuristic techniques you would use to quickly evaluate incorrect information from other humans will fail with Mallory. You need to go all the way back to primary sources and actually fully verify the output every time, or you will likely fall into one of these traps.

Mallory Mangling Mentorship

So far, I've been describing the effect Mallory will have in the context of an individual attempting to get some work done. If we are considering organization-wide adoption of Mallory, however, we must also consider the impact on team dynamics. There are a number of potential side effects one might consider here, but I will focus on just one that I have observed.

I have a cohort of friends in the software industry, most of whom are individual contributors. I'm a programmer who likes programming, so are most of my friends, and we are also (sigh), charitably, pretty solidly middle-aged at this point, so we tend to have a lot of experience.

As such, we are often the folks that the team - or, in my case, the community - goes to when less-experienced folks need answers.

On its own, this is actually pretty great. Answering questions from more junior folks is one of the best parts of a software development job. It's an opportunity to be helpful, mostly just by knowing a thing we already knew. And it's an opportunity to help someone else improve their own agency by giving them knowledge that they can use in the future.

However, generative AI throws a bit of a wrench into the mix.

Let's imagine a scenario where we have 2 developers: Alice, a staff engineer who has a good understanding of the system being built, and Bob, a relatively junior engineer who is still onboarding.

The traditional interaction between Alice and Bob, when Bob has a question, goes like this:

  1. Bob gets confused about something in the system being developed, because Bob's understanding of the system is incorrect.
  2. Bob formulates a question based on this confusion.
  3. Bob asks Alice that question.
  4. Alice knows the system, so she gives an answer which accurately reflects the state of the system to Bob.
  5. Bob's understanding of the system improves, and thus he will have fewer and better-informed questions going forward.

You can imagine how repeating this simple 5-step process will eventually transform Bob into a senior developer, and then he can start answering questions on his own. Making sufficient time for regularly iterating this loop is the heart of any good mentorship process.

Now, though, with Mallory in the mix, the process has a new decision point, changing it from a linear sequence to a flow chart.

We begin the same way, with steps 1 and 2. Bob's confused, Bob formulates a question, but then:

  3. Bob asks Mallory that question.

Here, our path then diverges into a "happy" path, a "meh" path, and a "sad" path.

The "happy" path proceeds like so:

  4. Mallory happens to formulate a correct answer.
  5. Bob's understanding of the system improves, and thus he will have fewer and better-informed questions going forward.

Great. Problem solved. We just saved some of Alice's time. But as we learned earlier, Mallory can make mistakes. When that happens, we will need to check important info. So let's get checking:

  4. Mallory happens to formulate an incorrect answer.
  5. Bob investigates this answer.
  6. Bob realizes that this answer is incorrect because it is inconsistent with some of his prior, correct knowledge of the system, or with his investigation.
  7. Bob asks Alice the same question; GOTO traditional interaction step 4.

On this path, Bob spent a while futzing around with Mallory, to no particular benefit. This wastes some of Bob's time, but then again, Bob could have ended up on the happy path, so perhaps it was worth the risk; at least Bob wasn't wasting any of Alice's much more valuable time in the process.16

Notice that beginning at the start of step 4, we must begin allocating all of Bob's time to C, so C already starts getting a bit bigger than if it were just Bob checking Mallory's output specifically on tasks that Bob is doing.

That brings us to the "sad" path.

  4. Mallory happens to formulate an incorrect answer.
  5. Bob investigates this answer.
  6. Bob does not realize that this answer is incorrect, because he is unable to recognize any inconsistencies with his existing, incomplete knowledge of the system.
  7. Bob integrates Mallory's incorrect information about the system into his mental model.
  8. Bob proceeds to make a larger and larger mess of his work, based on an incorrect mental model.
  9. Eventually, Bob asks Alice a new, worse question, based on this incorrect understanding.
  10. Sadly, we cannot return to the happy path at this point, because now Alice must unravel the complex series of confusing misunderstandings that Mallory has conveyed to Bob. In the really sad case, Bob doesn't even believe Alice for a while, because Mallory seems unbiased17, and Alice has to waste even more time convincing Bob before she can simply explain to him.

Now, we have wasted some of Bob's time, and some of Alice's time. Everything from steps 5 through 10 is C, and as soon as Alice gets involved, we are adding to C at double real-time. If more team members are pulled into the investigation, you are now multiplying C by the number of investigators, potentially running at triple or quadruple real time.

But That's Not All

Here I've presented a brief selection of reasons why C will be both large, and larger than you expect. To review:

  1. Gambling-style mechanics of the user interface will interfere with your own self-monitoring and developing a good estimate.
  2. You can't use human heuristics for quickly spotting bad answers.
  3. Wrong answers given to junior people who can't evaluate them will waste more time from your more senior employees.

But this is a small selection of ways that Mallory's output can cost you money and time. It's harder to simplistically model second-order effects like this, but there's also a broad range of possibilities for ways that, rather than simply checking and catching errors, an error slips through and starts doing damage. Or ways in which the output isn't exactly wrong, but still sub-optimal in ways which can be difficult to notice in the short term.

For example, you might successfully vibe-code your way to launching a series of applications, successfully "checking" the output along the way, but then discover that the resulting code is unmaintainable garbage that prevents future feature delivery, and needs to be re-written18. But this kind of intellectual debt isn't specific to code; it can affect even such apparently genAI-amenable fields as LinkedIn content marketing19.

Problems with the Prediction of P

C isn't the only challenging term, though. P is just as important, if not more so, and just as hard to measure.

LLM marketing materials love to phrase their accuracy in terms of a percentage. Accuracy claims for LLMs in general tend to hover around 70%20. But these scores vary per field, and when you aggregate them across multiple topic areas, they start to trend down. This is exactly why "agentic" approaches for more immediately-verifiable LLM outputs (with checks like "did the code work") got popular in the first place: you need to try more than once.

Independently measured claims about accuracy tend to be quite a bit lower21. The field of AI benchmarks is exploding, but it probably goes without saying that LLM vendors game those benchmarks22, because of course every incentive would encourage them to do that. Regardless of what their arbitrary scoring on some benchmark might say, all that matters to your business is whether it is accurate for the problems you are solving, for the way that you use it. Which is not necessarily going to correspond to any benchmark. You will need to measure it for yourself.

With that goal in mind, our formulation of P must be a somewhat harsher standard than "accuracy". It's not merely "was the factual information contained in any generated output accurate", but, "is the output good enough that some given real knowledge-work task is done and the human does not need to issue another prompt"?

Surprisingly Small Space for Slip-Ups

The problem with reporting these things as percentages at all, however, is that our actual definition for P is 1/attempts, where attempts, for any given task, must be an integer greater than or equal to 1.

Taken in aggregate, if we succeed on the first prompt more often than not, we could end up with P > 1/2, but combined with the previous observation that you almost always have to prompt it more than once, the practical reality is that P will start at 50% and go down from there.
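
Put another way: the honest way to estimate P from your own records is not "how often it felt right" but the reciprocal of the average number of prompts per finished task. A tiny, hypothetical example:

def estimate_p(attempts_per_task):
    """P as defined above: tasks completed divided by total prompts issued."""
    return len(attempts_per_task) / sum(attempts_per_task)

# Five finished tasks that took 1, 2, 2, 4, and 6 prompts respectively:
print(estimate_p([1, 2, 2, 4, 6]))  # 5 / 15 ≈ 0.33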

If we plug in some numbers, trying to be as extremely optimistic as we can, let's say that we have a uniform stream of tasks, every one of which can be addressed by Mallory, and every one of which:

  1. would take a human 45 minutes to do unaided (H=45);
  2. takes Mallory only one minute of inference to answer (I=1);
  3. takes only 5 minutes to write a prompt for (W=5);
  4. takes only 5 minutes to check for correctness (C=5);
  5. costs only the equivalent of well under a second of human time per inference (E=0.01 minutes);
  6. and is answered acceptably by Mallory every second prompt (P=1/2).

Thought experiments are a dicey basis for reasoning in the face of disagreements, so I have tried to formulate something here that is absolutely, comically, over-the-top stacked in favor of the AI optimist.

Would that be profitable? It sure seems like it, given that we are trading off 45 minutes of human time for 1 minute of Mallory-time and 10 minutes of human time. If we ask Python:

>>> def FF(H, I, C, P, W, E):
...     return (W + I + C + E) / (P * H)
... FF(H=45.0, I=1.0, C=5.0, P=1/2, W=5.0, E=0.01)
...
0.48933333333333334

We get a futzing fraction of about 0.4893. Not bad! Sounds like, at least under these conditions, it would indeed be cost-effective to deploy Mallory. But… realistically, do you reliably get useful, done-with-the-task quality output on the second prompt? Let's bump up the denominator on P just a little bit there, and see how we fare:

>>> FF(H=45.0, I=1.0, C=5.0, P=1/3, W=5.0, E=0.01)
0.734

Oof. Still cost-effective at 0.734, but not quite as good. Where do we cap out, exactly?

>>> from itertools import count
... for A in count(start=4):
...     print(A, result := FF(H=45.0, I=1.0, C=5.0, P=1 / A, W=5.0, E=1/60.))
...     if result > 1:
...         break
...
4 0.9792592592592594
5 1.224074074074074
>>>

With this little test, we can see that at our next iteration we are already at 0.9792, and by 5 tries per prompt, even in this absolute fever-dream of an over-optimistic scenario, with a futzing fraction of 1.2240, Mallory is now a net detriment to our bottom line.
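
Another way to see where that crossover sits: setting FF = 1 and substituting P = 1/attempts gives a break-even point of attempts = H / (W + I + C + E). Under the same comically optimistic assumptions as the snippet above, that works out to roughly four attempts per task before Mallory stops paying for itself:

break_even_attempts = 45.0 / (5.0 + 1.0 + 5.0 + 1/60)  # H / (W + I + C + E)
print(round(break_even_attempts, 2))  # 4.08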

Harm to the Humans

We are treating H as functionally constant so far, an average around some hypothetical Gaussian distribution, but the distribution itself can also change over time.

Formally speaking, an increase to H would be good for our fraction. Maybe it would even be a good thing; it could mean we're taking on harder and harder tasks due to the superpowers that Mallory has given us.

But an observed increase to H would probably not be good. An increase could also mean your humans are getting worse at solving problems, because using Mallory has atrophied their skills23 and sabotaged learning opportunities2425. It could also go up because your senior, experienced people now hate their jobs26.

For some more vulnerable folks, Mallory might just take a shortcut to all these complex interactions and drive them completely insane27 directly. Employees experiencing an intense psychotic episode are famously less productive than those who are not.

This could all be very bad if our futzing fraction eventually does head north of 1 and you need to consider reintroducing human-only workflows, without Mallory.

Abridging the Artificial Arithmetic (Alliteratively)

To reiterate, I have proposed this fraction:

FF = (W + I + C + E) / (P × H)

which shows us positive ROI when FF is less than 1, and negative ROI when it is more than 1.

This model is heavily simplified. A comprehensive measurement program that tests the efficacy of any technology, let alone one as complex and rapidly changing as LLMs, is more complex than could be captured in a single blog post.

Real-world work might be insufficiently uniform to fit into a closed-form solution like this. Perhaps an iterated simulation, with variables drawn from the range of values seen in your team's metrics, would give better results.
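
For what it's worth, such a simulation doesn't need to be elaborate. Here's a deliberately crude sketch; every range and distribution in it is a placeholder that you would swap for your own measurements:

import random

def simulated_futzing_fraction(tasks=10_000):
    """Aggregate FF over many tasks, drawing each term from a range
    instead of assuming one fixed value for the whole team."""
    futz_time = 0.0
    human_time = 0.0
    for _ in range(tasks):
        H = random.uniform(20, 90)       # unaided human time, minutes
        W = random.uniform(2, 10)        # minutes to write each prompt
        I = random.uniform(0.5, 2)       # minutes waiting on inference
        C = random.uniform(3, 15)        # minutes checking each response
        E = 0.02                         # marginal inference cost, in minutes of pay
        attempts = random.randint(1, 6)  # prompts until output is acceptable
        futz_time += attempts * (W + I + C + E)
        human_time += H
    return futz_time / human_time

print(simulated_futzing_fraction())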

However, in this post, I want to illustrate that if you are going to try to evaluate an LLM-based tool, you need to at least include some representation of each of these terms somewhere. They are all fundamental to the way the technology works, and if you're not measuring them somehow, then you are flying blind into the genAI storm.

I also hope to show that a lot of existing assumptions about how benefits might be demonstrated, for example with user surveys about general impressions, or by evaluating artificial benchmark scores, are deeply flawed.

Even making what I consider to be wildly, unrealistically optimistic assumptions about these measurements, I hope I've shown:

  1. in the numerator, C might be a lot higher than you expect,
  2. in the denominator, P might be a lot lower than you expect,
  3. repeated use of an LLM might make H go up, but despite the fact that it's in the denominator, that will ultimately be quite bad for your business.

Personally, I don't have all that many concerns about E and I. E is still benefiting from significant loss-leader pricing, and I might not be coming down as fast as vendors would like us to believe, but if the other numbers work out, I don't think they make a huge difference. However, there might still be surprises lurking in there, and if you want to rationally evaluate the effectiveness of a model, you need to be able to measure them and incorporate them as well.

In particular, I really want to stress the importance of the influence of LLMs on your team dynamic, as that can cause massive, hidden increases to C. LLMs present opportunities for junior employees to generate an endless stream of chaff that will simultaneously:

  1. look plausible and authoritative enough to sail past quick, heuristic review;
  2. quietly corrupt those employees' own mental models of the system, sabotaging their learning;
  3. consume ever more of your most senior people's time to check and untangle.

If you've already deployed LLM tooling without measuring these things and without updating your performance management processes to account for the strange distortions that these tools make possible, your Futzing Fraction may be much, much greater than 1, creating hidden costs and technical debt that your organization will not notice until a lot of damage has already been done.

If you got all the way here, particularly if you're someone who is enthusiastic about these technologies, thank you for reading. I appreciate your attention and I am hopeful that if we can start paying attention to these details, perhaps we can all stop futzing around so much with this stuff and get back to doing real work.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. I do not share this optimism, but I want to try very hard in this particular piece to take it as a given that genAI is in fact helpful.

  2. If we could have a better prompt on demand via some repeatable and automatable process, surely we would have used a prompt that got the answer we wanted in the first place.

  3. The software idea of a "user agent" straightforwardly comes from the legal principle of an agent, which has deep roots in common law, jurisprudence, philosophy, and math. When we think of an agent (some software) acting on behalf of a principal (a human user), this historical baggage imputes some important ethical obligations to the developer of the agent software. genAI vendors have been as eager as any software vendor to dodge responsibility for faithfully representing the user's interests even as there are some indications that at least some courts are not persuaded by this dodge, at least by the consumers of genAI attempting to pass on the responsibility all the way to end users. Perhaps it goes without saying, but I'll say it anyway: I don't like this newer interpretation of "agent".

  4. "Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents", Axel Backlund, Lukas Petersson, Feb 20, 2025

  5. "random thing are happening, maxed out usage on api keys", @leojr94 on Twitter, Mar 17, 2025

  6. "New study sheds light on ChatGPT's alarming interactions with teens"

  7. "Lawyers submitted bogus case law created by ChatGPT. A judge fined them $5,000", by Larry Neumeister for the Associated Press, June 22, 2023

  8. During which a human will be busy-waiting on an answer.

  9. Given the fluctuating pricing of these products, and fixed subscription overhead, this will obviously need to be amortized; including all the additional terms to actually convert this from your inputs is left as an exercise for the reader.

  10. I feel like I should emphasize explicitly here that everything is an average over repeated interactions. For example, you might observe that a particular LLM has a low probability of outputting acceptable work on the first prompt, but higher probability on subsequent prompts in the same context, such that it usually takes 4 prompts. For the purposes of this extremely simple closed-form model, we'd still consider that a P of 25%, even though a more sophisticated model, or a monte carlo simulation that sets progressive bounds on the probability, might produce more accurate values.

  11. No it isn't, actually, but for the sake of argument let's grant that it is.

  12. It's worth noting that all this expensive measuring itself must be included in C until you have a solid grounding for all your metrics, but let's optimistically leave all of that out for the sake of simplicity.

  13. "AI Company Poll Finds 45% of Workers Trust the Tech More Than Their Peers", by Suzanne Blake for Newsweek, Aug 13, 2025

  14. AI Chatbots Remain Overconfident - Even When They're Wrong by Jason Bittel for the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University, July 22, 2025

  15. AI Mistakes Are Very Different From Human Mistakes by Bruce Schneier and Nathan E. Sanders for IEEE Spectrum, Jan 13, 2025

  16. Foreshadowing is a narrative device in which a storyteller gives an advance hint of an upcoming event later in the story.

  17. "People are worried about the misuse of AI, but they trust it more than humans"

  18. "Why I stopped using AI (as a Senior Software Engineer)", theSeniorDev YouTube channel, Jun 17, 2025

  19. "I was an AI evangelist. Now I'm an AI vegan. Here's why.", Joe McKay for the greatchatlinkedin YouTube channel, Aug 8, 2025

  20. "What LLM is The Most Accurate?"

  21. "Study Finds That 52 Percent Of ChatGPT Answers to Programming Questions are Wrong", by Sharon Adarlo for Futurism, May 23, 2024

  22. "Off the Mark: The Pitfalls of Metrics Gaming in AI Progress Races", by Tabrez Syed on BoxCars AI, Dec 14, 2023

  23. "I tried coding with AI, I became lazy and stupid", by Thomasorus, Aug 8, 2025

  24. "How AI Changes Student Thinking: The Hidden Cognitive Risks" by Timothy Cook for Psychology Today, May 10, 2025

  25. "Increased AI use linked to eroding critical thinking skills" by Justin Jackson for Phys.org, Jan 13, 2025

  26. "AI could end my job - Just not the way I expected" by Manuel Artero Anguita on dev.to, Jan 27, 2025

  27. "The Emerging Problem of "AI Psychosis"" by Gary Drevitch for Psychology Today, July 21, 2025.

15 Aug 2025 7:51am GMT

09 Aug 2025

feedPlanet Twisted

Glyph Lefkowitz: R0ML’s Ratio

My father, also known as "R0ML" once described a methodology for evaluating volume purchases that I think needs to be more popular.

If you are a hardcore fan, you might know that he has already described this concept publicly in a talk at OSCON in 2005, among other places, but it has never found its way to the public Internet, so I'm giving it a home here, and in the process, appropriating some of his words.1


Let's say you're running a circus. The circus has many clowns. Ten thousand clowns, to be precise. They require bright red clown noses. Therefore, you must acquire a significant volume of clown noses. An enterprise licensing agreement for clown noses, if you will.

If the nose plays, it can really make the act. In order to make sure you're getting quality noses, you go with a quality vendor. You select a vendor who can supply noses for $100 each, at retail.

Do you want to buy retail? Ten thousand clowns, ten thousand noses, one hundred dollars: that's a million bucks worth of noses, so it's worth your while to get a good deal.

As a conscientious executive, you go to the golf course with your favorite clown accessories vendor and negotiate yourself a 50% discount, with a commitment to buy all ten thousand noses.

Is this a good deal? Should you take it?

To determine this, we will use an analytical tool called R0ML's Ratio (RR).

The ratio has 2 terms:

  1. the Full Undiscounted Retail List Price of Units Used (FURLPoUU), which can of course be computed by the individual retail list price of a single unit (in our case, $100) multiplied by the number of units used
  2. the Total Price of the Entire Enterprise Volume Licensing Agreement (TPotEEVLA), which in our case is $500,000.

It is expressed as:

RR = TPotEEVLA / FURLPoUU

Crucially, you must be able to compute the number of units used in order to complete this ratio. If, as expected, every single clown wears their nose at least once during the period of the license agreement, then our Units Used is 10,000, our FURLPoUU is $1,000,000 and our TPotEEVLA is $500,000, which makes our RR 0.5.

Congratulations. If R0ML's Ratio is less than 1, it's a good deal. Proceed.

But… maybe the nose doesn't play. Not every clown's costume is an exact clone of the traditional, stereotypical image of a clown. Many are avant-garde. Perhaps this plentiful proboscis pledge was premature. Here, I must quote the originator of this theoretical framework directly:

What if the wheeze doesn't please?

What if the schnozz gives some pause?

In other words: what if some clowns don't wear their noses?

If we were to do this deal, and then ask around afterwards to find out that only 200 of our 10,000 clowns were to use their noses, then FURLPoUU comes out to 200 * $100, for a total of $20,000. In that scenario, RR is 25, which you may observe is substantially greater than 1.

If you do a deal where R0ML's ratio is greater than 1, then you are the bozo.
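
In code-shaped terms (the function and argument names are mine, purely illustrative), the whole bozo test is one division:

def r0mls_ratio(deal_price, unit_list_price, units_used):
    """Total deal price over the full retail price of the units actually used."""
    return deal_price / (unit_list_price * units_used)

print(r0mls_ratio(500_000, 100, 10_000))  # every nose worn: 0.5 -- good deal
print(r0mls_ratio(500_000, 100, 200))     # only 200 noses worn: 25.0 -- bozo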


I apologize if I have belabored this point. As R0ML expressed in the email we exchanged about this many years ago,

I do not mind if you blog about it - and I don't mind getting the credit - although one would think it would be obvious.

And yeah, one would think this would be obvious? But I have belabored it because many discounted enterprise volume purchasing agreements still fail the R0ML's Ratio Bozo Test.2

In the case of clown noses, if you pay the discounted price, at least you get to keep the nose; maybe lightly-used clown noses have some resale value. But in software licensing or SaaS deals, once you've purchased the "discounted" software or service, once you have provisioned the "seats", the money is gone, and if your employees don't use it, then no value for your organization will ever result.

Measuring number of units used is very important. Without this number, you have no idea if you are a bozo or not.

It is often better to give your individual employees a corporate card and allow them to make arbitrary individual purchases of software licenses and SaaS tools, with minimal expense-reporting overhead; this will always keep R0ML's Ratio at 1.0, and thus, you will never be a bozo.

It is always better to do that the first time you are purchasing a new software tool, because the first time making such a purchase you (almost by definition) have no information about "units used" yet. You have no idea - you cannot have any idea - if you are a bozo or not.

If you don't know who the bozo is, it's probably you.

Acknowledgments

Thank you for reading, and especially thank you to my patrons who are supporting my writing on this blog. Of course, extra thanks to dad for, like, having this idea and doing most of the work here beyond my transcription. If you like my dad's ideas and you'd like me to post more of them, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. One of my other favorite posts on this blog was just stealing another one of his ideas, so hopefully this one will be good too.

  2. This concept was first developed in 2001, but it has some implications for extremely recent developments in the software industry; but that's a post for another day.

09 Aug 2025 4:41am GMT

08 Aug 2025

feedPlanet Twisted

Glyph Lefkowitz: The Best Line Length

What's a good maximum line length for your coding standard?

This is, of course, a trick question. By posing it as a question, I have created the misleading impression that it is a question, but Black has selected the correct number for you; it's 88 which is obviously very lucky.

Thanks for reading my blog.


OK, OK. Clearly, there's more to it than that. This is an age-old debate on the level of "tabs versus spaces". So contentious, in fact, that even the famously opinionated Black does in fact let you change it.

Ancient History

One argument that certain silly people1 like to make is "why are we wrapping at 80 characters like we are using 80 character teletypes, it's the 2020s! I have an ultrawide monitor!". The implication here is that the width of 80-character terminals is an antiquated relic, based entirely around the hardware limitations of a bygone era, and modern displays can put tons of stuff on one line, so why not use that capability?

This feels intuitively true, given the huge disparity between ancient times and now: on my own display, I can comfortably fit about 350 characters on a line. What a shame, to have so much room for so many characters in each line, and to waste it all on blank space!

But... is that true?

I stretched out my editor window all the way to measure that '350' number, but I did not continue editing at that window width. In order to have a more comfortable editing experience, I switched back into writeroom mode, a mode which emulates a considerably more writerly application, which limits each line length to 92 characters, regardless of frame width.

You've probably noticed this too. Almost all sites that display prose of any kind limit their width, even on very wide screens.

As silly as that tiny little ribbon of text running down the middle of your monitor might look on a full-screened, stereotypical news site or blog, if you full-screen a site that doesn't set that width limit, it will look extremely, almost unreadably bad, even though it seems like you should now be able to use all that space.

Blogging software does not set a column width limit on your text because of some 80-character-wide accident of history in the form of a hardware terminal.

Similarly, if you really try to use that screen real estate to its fullest for coding, and start editing 200-300 character lines, you'll quickly notice it starts to feel just a bit weird and confusing. It gets surprisingly easy to lose your place. Rhetorically the "80 characters is just because of dinosaur technology! Use all those ultrawide pixels!" talking point is quite popular, but practically people usually just want a few more characters worth of breathing room, maxing out at 100 characters, far narrower than even the most svelte widescreen.

So maybe those 80 character terminals are holding us back a little bit, but... wait a second. Why were the terminals 80 characters wide in the first place?

Ancienter History

As this lovely Software Engineering Stack Exchange post summarizes, terminals were probably 80 characters because teletypes were 80 characters, and teletypes were probably 80 characters because punch cards were 80 characters, and punch cards were probably 80 characters because that's just about how many typewritten characters fit onto one line of a US-Letter piece of paper.

Even before typewriters, consider the average newspaper: why do we call a regularly-occurring featured article in a newspaper a "column"? Because broadsheet papers were too wide to have only a single column; they would always be broken into multiple! Far more aggressive than 80 characters, columns in newspapers typically have 30 characters per line.

The first newspaper printing machines were custom designed and could have used whatever width they wanted, so why standardize on something so narrow?3

Science!

There has been a surprising amount of scientific research around this issue, but in brief, there's a reason here rooted in human physiology: when you read a block of text, you are not consciously moving your eyes from word to word like you're dragging a mouse cursor, repositioning continuously. Human eyes reading text move in quick bursts of rotation called "saccades". In order to quickly and accurately move from one line of text to another, the start of the next line needs to be clearly visible in the reader's peripheral vision in order for them to accurately target it. This limits the angle of rotation that the reader can perform in a single saccade, and, thus, the length of a line that they can comfortably read without hunting around for the start of the next line every time they get to the end.

So, 80 (or 88) characters isn't too unreasonable for a limit. It's longer than 30 characters, that's for sure!

But, surely that's not all, or this wouldn't be so contentious in the first place?

Caveats

The screen is wide, though.

The ultrawide aficionados do have a point, even if it's not really the simple one about "old terminals" they originally thought. Our modern wide-screen displays are criminally underutilized, particularly for text. Even adding in the big chunky file, class, and method tree browser over on the left and the source code preview on the right, a brief survey of a Google Image search for "vs code" shows a lot of editors open with huge, blank areas on the right side of the window.

Big screens are super useful as they allow us to leverage our spatial memories to keep more relevant code around and simply glance around as we think, rather than navigate interactively. But it only works if you remember to do it.

Newspapers allowed us to read a ton of information in one sitting with minimum shuffling by packing in as much as 6 columns of text. You could read a column to the bottom of the page, back to the top, and down again, several times.

Similarly, books fill both of their opposed pages with text at the same time, doubling the amount of stuff you can read at once before needing to turn the page.

You may notice that reading text in a book, even in an ebook app, is more comfortable than reading a ton of text by scrolling around in a web browser. That's because our eyes are built for saccades, and repeatedly tracking the continuous smooth motion of the page as it scrolls to a stop, then re-targeting the new fixed location to start saccading around from, is literally more physically strenuous on your eye's muscles!

There's a reason that the codex was a big technological innovation over the scroll. This is a regression!

Today, the right thing to do here is to make use of horizontally split panes in your text editor or IDE, and just make a bit of conscious effort to set up the appropriate code on screen for the problem you're working on. However, this is a potential area for different IDEs to really differentiate themselves, and build multi-column continuous-code-reading layouts that allow for buffers to wrap and be navigable newspaper-style.

Similarly, modern CSS has shockingly good support for multi-column layouts, and it's a shame that true multi-column, page-turning layouts are so rare. If I ever figure out a way to deploy this here that isn't horribly clunky and fighting modern platform conventions like "scrolling horizontally is substantially more annoying and inconsistent than scrolling vertically", maybe I will experiment with such a layout on this blog one day. Until then… just make the browser window narrower so other useful stuff can be in the other parts of the screen, I guess.

Code Isn't Prose

But, I digress. While I think that columnar layouts for reading prose are an interesting thing more people should experiment with, code isn't prose.

The metric used for ideal line width, which you may have noticed if you clicked through some of those Wikipedia links earlier, is not "character cells in your editor window", it is characters per line, or "CPL".

With an optimal CPL somewhere between 45 and 95, a code-line-width of somewhere around 90 might actually be the best idea, because whitespace uses up your line-width budget. In a typical object-oriented Python program2, most of your code ends up indented by at least 8 spaces: 4 for the class scope, 4 for the method scope. Most likely a lot of it is 12, because any interesting code will have at least one conditional or loop. So, by the time you're done wasting all that horizontal space, a max line length of 90 actually looks more like a maximum of 78... right about that sweet spot from the US-Letter page in the typewriter that we started with.
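
To make that arithmetic concrete, here's a throwaway illustration, using Black's default of 88, of how much of the line-width budget is left for actual code at typical Python indentation depths:

MAX_LINE = 88  # Black's default

for depth, context in [(0, "module level"), (8, "method body"), (12, "inside a loop or if")]:
    print(f"{context:22} {MAX_LINE - depth} characters of code per line")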

What about soft-wrap?

In principle, source code is structured information, whose presentation could be fully decoupled from its serialized representation. Everyone could configure their preferred line width appropriate to their custom preferences and the specific physiological characteristics of their eyes, and the code could be formatted according to the language it was expressed in, and "hard wrapping" could be a silly antiquated thing.

The problem with this argument is the same as the argument against "but tabs are semantic indentation", to wit: nope, no it isn't. What "in principle" means in the previous paragraph is actually "in a fantasy world which we do not inhabit". I'd love it if editors treated code this way and we had a rich history and tradition of structured manipulations rather than typing in strings of symbols to construct source code textually. But that is not the world we live in. Hard wrapping is unfortunately necessary to integrate with diff tools.

So what's the optimal line width?

The exact, specific number here is still ultimately a matter of personal preference.

Hopefully, understanding the long history, science, and underlying physical constraints can lead you to select a contextually appropriate value for your own purposes that will balance ease of reading, integration with the relevant tools in your ecosystem, diff size, presentation in the editors and IDEs that your contributors tend to use, reasonable display in web contexts, on presentation slides, and so on.

But - and this is important - counterpoint:

No it isn't, you don't need to select an optimal width, because it's already been selected for you. It is 88.

Acknowledgments

Thank you for reading, and especially thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. I love the fact that this message is, itself, hard-wrapped to 77 characters.

  2. Let's be honest; we're all object-oriented python programmers here, aren't we?

  3. Unsurprisingly, there are also financial reasons. More, narrower columns meant it was easier to fix typesetting errors and to insert more advertisements as necessary. But readability really did have a lot to do with it, too; scientists were looking at ease of reading as far back as the 1800s.

08 Aug 2025 5:37am GMT