22 Mar 2019

feedPlanet Debian

Enrico Zini: debian-vote statistics

Updated: re-run on a mailbox with only the post-nomination discussion.

I made a script to compute some statistics on debian-vote's election discussions.

Here are the results as of 2019-03-22 14:30 UTC+1:

These are the numbers of mails sent by people who posted more than 2 messages:

Name                     Mails
==============================
Joerg Jaspert               15
Martin Michlmayr            12
Jonathan Carter             11
Andreas Tille                8
Sam Hartman                  8
Lucas Nussbaum               7
Jose Miguel Parrella         5
Ian Jackson                  3
Sean Whitton                 3
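
For reference, here is a minimal sketch of how such per-sender counts could be computed with Python's mailbox module. The mailbox file name is hypothetical, and the actual script may differ:

import mailbox
from collections import Counter
from email.utils import parseaddr

counts = Counter()
for msg in mailbox.mbox("debian-vote.mbox"):  # hypothetical mailbox file
    name, addr = parseaddr(msg.get("From", ""))
    counts[name or addr] += 1

for name, n in counts.most_common():
    if n > 2:
        print(f"{name:24} {n:5}")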

These are the sums and averages of lines of non-quoted message text sent by people:

Name                     Sum   Avg
==================================
Jonathan Carter          659    60
Sam Hartman              389    49
Martin Michlmayr         355    30
Joerg Jaspert            336    22
Andreas Tille            254    32
Lucas Nussbaum           165    24
Ian Jackson              150    50
Jose Miguel Parrella     136    27
Sean Whitton              41    14

These are the top keywords of messages sent by the candidates so far, scored by an improvised TFIDF metric:

Sam Hartman
  people, valuable, doing, ways, focus, might, work
Jonathan Carter
  back, wiki, bold, brave, perhaps, developer, software
Joerg Jaspert
  upload, thing, good, nice, such, something, just
Martin Michlmayr
  believe, change, where, maybe, people, technical, they
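
The metric is improvised, so the exact scoring is unknown; a plain TF-IDF sketch along these lines captures the idea of ranking words a candidate uses often but other candidates use rarely (all names here are made up):

import math
from collections import Counter

def top_keywords(words_by_author, author, top_n=7):
    """Rank words one author uses often relative to all authors."""
    own = words_by_author[author]
    tf = Counter(own)
    n_docs = len(words_by_author)
    def score(word):
        # document frequency: how many authors used this word at all
        df = sum(1 for words in words_by_author.values() if word in words)
        return (tf[word] / len(own)) * math.log(n_docs / df)
    return sorted(tf, key=score, reverse=True)[:top_n]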

22 Mar 2019 11:48am GMT

feedPlanet Python

Reuven Lerner: Seven ways to improve your team’s Python

If you're a manager, then you're always trying to find ways that'll help your team do more in less time. That's why you use Python - because it makes your developers more productive. They can spend more time creating new features, and less time debugging or maintaining existing code. It's no surprise that so many companies are moving to Python.

After you've moved to Python, you can still make your team more effective. That is, your organization can become more productive, combining technology and culture to help your developers improve. In such a scenario, everyone wins: Your company becomes more efficient and effective, and your team members are more satisfied.

How can you improve your company's Python? I've prepared a 6-page white paper on the subject, describing a variety of techniques I've both seen and used with companies around the world. It's free to download; I hope that you'll read it and adopt some of the techniques I've listed here.

Thoughts or feedback on what I've written? You can always e-mail me at reuven@lerner.co.il, or contact me on Twitter as @reuvenmlerner. I'll be happy to hear your thoughts!

The post Seven ways to improve your team's Python appeared first on Reuven Lerner's Blog.

22 Mar 2019 10:22am GMT

21 Mar 2019

feedPlanet Debian

Simon Josefsson: Offline Ed25519 OpenPGP key with subkeys on FST-01G running Gnuk

Below I describe how to generate an OpenPGP key and import it to a FST-01G device running Gnuk. See my earlier post on planning for my new OpenPGP key and the post on preparing the FST-01G to run Gnuk. For comparison with a RSA/YubiKey based approach, you can read about my setup from 2014.

Most of the steps below are covered by the Gnuk manual. The primary complication for me is the use of an offline machine and storing the GnuPG home directory on a USB memory device.

Offline machine

I use a laptop that is not connected to the Internet and boot it from a read-only USB memory stick. Finding a live CD that contains the necessary tools for using GnuPG with smartcards (gpg-agent, scdaemon, pcscd) is significantly harder than it should be. Using a rarely audited image raises the question of whether you can trust it: a patched kernel or gpg that generates poor randomness would be an easy and hard-to-notice hack. I'm using the PGP/PKI Clean Room Live CD. Recommendations on more widely used and audited alternatives would be appreciated. Select "Advanced Options" and "Run Shell" to escape the menus. Insert a new USB memory device, and prepare it as follows:

pgp@pgplive:/home/pgp$ sudo wipefs -a /dev/sdX
pgp@pgplive:/home/pgp$ sudo fdisk /dev/sdX
# create a primary partition of Linux type
pgp@pgplive:/home/pgp$ sudo mkfs.ext4 /dev/sdX1
pgp@pgplive:/home/pgp$ sudo mount /dev/sdX1 /mnt
pgp@pgplive:/home/pgp$ sudo mkdir /mnt/gnupghome
pgp@pgplive:/home/pgp$ sudo chown pgp.pgp /mnt/gnupghome
pgp@pgplive:/home/pgp$ sudo chmod go-rwx /mnt/gnupghome

GnuPG configuration

Set your GnuPG home directory to point to the gnupghome directory on the USB memory device. You will need to do this in every terminal window you open that you want to use GnuPG in.

pgp@pgplive:/home/pgp$ export GNUPGHOME=/mnt/gnupghome
pgp@pgplive:/home/pgp$

At this point, you should be able to run gpg --card-status and get output from the smartcard.

Create master key

Create a master key and make a backup copy of the GnuPG home directory with it, together with an exported ASCII version.

pgp@pgplive:/home/pgp$ gpg --quick-gen-key "Simon Josefsson <simon@josefsson.org>" ed25519 sign 216d
gpg: keybox '/mnt/gnupghome/pubring.kbx' created
gpg: /mnt/gnupghome/trustdb.gpg: trustdb created
gpg: key D73CF638C53C06BE marked as ultimately trusted
gpg: directory '/mnt/gnupghome/openpgp-revocs.d' created
gpg: revocation certificate stored as '/mnt/gnupghome/openpgp-revocs.d/B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE.rev'
pub   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid                      Simon Josefsson <simon@josefsson.org>

pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/masterkey.txt
pgp@pgplive:/home/pgp$ sudo cp -a $GNUPGHOME $GNUPGHOME-backup-masterkey
pgp@pgplive:/home/pgp$ 

Create subkeys

Create subkeys and make a backup of them too, as follows.

pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE cv25519 encr 216d
pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE ed25519 auth 216d
pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE ed25519 sign 216d
pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/mastersubkeys.txt
pgp@pgplive:/home/pgp$ gpg -a --export-secret-subkeys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/subkeys.txt
pgp@pgplive:/home/pgp$ sudo cp -a $GNUPGHOME $GNUPGHOME-backup-mastersubkeys
pgp@pgplive:/home/pgp$ 

Move keys to card

Prepare the card by setting Admin PIN, PIN, your full name, sex, login account, and key URL as you prefer, following the Gnuk manual on card personalization.

Move the subkeys from your GnuPG keyring to the FST-01G using the keytocard command.

Take a final backup - because moving the subkeys to the card modifies the local GnuPG keyring - and create an ASCII-armored version of the public key, to be transferred to your daily machine.

pgp@pgplive:/home/pgp$ gpg --list-secret-keys
/mnt/gnupghome/pubring.kbx
--------------------------
sec   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]

pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/masterstubs.txt
pgp@pgplive:/home/pgp$ gpg -a --export-secret-subkeys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/subkeysstubs.txt
pgp@pgplive:/home/pgp$ gpg -a --export B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/publickey.txt
pgp@pgplive:/home/pgp$ cp -a $GNUPGHOME $GNUPGHOME-backup-masterstubs
pgp@pgplive:/home/pgp$ 

Transfer to daily machine

Copy publickey.txt to your day-to-day laptop and import it and create stubs using --card-status.

jas@latte:~$ gpg --import < publickey.txt 
gpg: key D73CF638C53C06BE: public key "Simon Josefsson <simon@josefsson.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ gpg --card-status

Reader ...........: Free Software Initiative of Japan Gnuk (FSIJ-1.2.14-67252015) 00 00
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: A3CC 9C87 0B9D 310A BAD4  CF2F 5172 2B08 FE47 45A2
      created ....: 2019-03-20 23:40:49
Encryption key....: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60
      created ....: 2019-03-20 23:40:26
Authentication key: CA7E 3716 4342 DF31 33DF  3497 8026 0EE8 A9B9 2B2B
      created ....: 2019-03-20 23:40:37
General key info..: sub  ed25519/51722B08FE4745A2 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec   ed25519/D73CF638C53C06BE  created: 2019-03-20  expires: 2019-10-22
ssb>  cv25519/02923D7EE76EBD60  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/80260EE8A9B92B2B  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/51722B08FE4745A2  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
jas@latte:~$ 

Before the key can be used after the import, you must update the trust database for the secret key (for example with gpg --edit-key and its trust command).

Now you should have an offline master key with subkey stubs. Note in the output below that the master key is not available (sec#) and the subkeys are stubs for smartcard keys (ssb>).

jas@latte:~$ gpg --list-secret-keys
sec#  ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]

jas@latte:~$

If your environment variables are set up correctly, SSH should find the authentication key automatically.

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzCFcHHrKzVSPDDarZPYqn89H5TPaxwcORgRg+4DagE cardno:FFFE67252015

GnuPG and SSH are now ready to be used with the new key. Thanks for reading!

21 Mar 2019 8:45pm GMT

Simon Josefsson: Installing Gnuk on FST-01G running NeuG

The FST-01G device that you order from the FSF shop runs NeuG. To be able to use the device as an OpenPGP smartcard, you need to install Gnuk. While Niibe covers this in his tutorial, I found the steps a bit complicated to follow. The following guides you from buying the device to getting an FST-01G running Gnuk, ready for use with GnuPG.

Once you have received the device and inserted it into a USB port, your kernel log (sudo dmesg) will show something like the following:

[628772.874658] usb 1-1.5.1: New USB device found, idVendor=234b, idProduct=0004
[628772.874663] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[628772.874666] usb 1-1.5.1: Product: Fraucheky
[628772.874669] usb 1-1.5.1: Manufacturer: Free Software Initiative of Japan
[628772.874671] usb 1-1.5.1: SerialNumber: FSIJ-0.0
[628772.875204] usb-storage 1-1.5.1:1.0: USB Mass Storage device detected
[628772.875452] scsi host6: usb-storage 1-1.5.1:1.0
[628773.886539] scsi 6:0:0:0: Direct-Access     FSIJ     Fraucheky        1.0  PQ: 0 ANSI: 0
[628773.887522] sd 6:0:0:0: Attached scsi generic sg2 type 0
[628773.888931] sd 6:0:0:0: [sdb] 128 512-byte logical blocks: (65.5 kB/64.0 KiB)
[628773.889558] sd 6:0:0:0: [sdb] Write Protect is off
[628773.889564] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
[628773.890305] sd 6:0:0:0: [sdb] No Caching mode page found
[628773.890314] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[628773.902617]  sdb:
[628773.906066] sd 6:0:0:0: [sdb] Attached SCSI removable disk

The device comes up as a USB mass storage device. Conveniently, it contains documentation describing what it is, and you can identify the version of NeuG it runs as follows.

jas@latte:~/src/gnuk$ head /media/jas/Fraucheky/README 
NeuG - a true random number generator implementation (for STM32F103)

                                                          Version 1.0.7
                                                             2018-01-19
                                                           Niibe Yutaka
                                      Free Software Initiative of Japan

To convert the device into the serial-mode that is required for the software upgrade, use the eject command for the device (above it came up as /dev/sdb): sudo eject /dev/sdb. The kernel log will now contain something like this:

[628966.847387] usb 1-1.5.1: reset full-speed USB device number 27 using ehci-pci
[628966.955723] usb 1-1.5.1: device firmware changed
[628966.956184] usb 1-1.5.1: USB disconnect, device number 27
[628967.115322] usb 1-1.5.1: new full-speed USB device number 28 using ehci-pci
[628967.233272] usb 1-1.5.1: New USB device found, idVendor=234b, idProduct=0001
[628967.233277] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[628967.233280] usb 1-1.5.1: Product: NeuG True RNG
[628967.233283] usb 1-1.5.1: Manufacturer: Free Software Initiative of Japan
[628967.233286] usb 1-1.5.1: SerialNumber: FSIJ-1.0.7-67252015
[628967.234034] cdc_acm 1-1.5.1:1.0: ttyACM0: USB ACM device

The strings NeuG True RNG and FSIJ-1.0.7 suggest it is running NeuG version 1.0.7.

Now both Gnuk itself and reGNUal need to be built, as follows. If you get any error message, you likely don't have the necessary dependencies installed.

jas@latte:~/src$ git clone https://salsa.debian.org/gnuk-team/gnuk/neug.git
jas@latte:~/src$ git clone https://salsa.debian.org/gnuk-team/gnuk/gnuk.git
jas@latte:~/src$ cd gnuk/src/
jas@latte:~/src/gnuk/src$ git submodule update --init
jas@latte:~/src/gnuk/src$ ./configure --vidpid=234b:0000
...
jas@latte:~/src/gnuk/src$ make
...
jas@latte:~/src/gnuk/src$ cd ../regnual/
jas@latte:~/src/gnuk/regnual$ make
jas@latte:~/src/gnuk/regnual$ cd ../../

You are now ready to flash the device, as follows.

jas@latte:~/src$ sudo neug/tool/neug_upgrade.py -f gnuk/regnual/regnual.bin gnuk/src/build/gnuk.bin 
gnuk/regnual/regnual.bin: 4544
gnuk/src/build/gnuk.bin: 113664
CRC32: 931cab51

Device: 
Configuration: 1
Interface: 1
20000e00:20005000
Downloading flash upgrade program...
start 20000e00
end   20001f00
# 20001f00: 31 : 196
Run flash upgrade program...
Wait 3 seconds...
Device: 
08001000:08020000
Downloading the program
start 08001000
end   0801bc00
jas@latte:~/src$ 

Remove and re-insert the device, and the kernel log should contain something like this:

[629120.399875] usb 1-1.5.1: new full-speed USB device number 32 using ehci-pci
[629120.511003] usb 1-1.5.1: New USB device found, idVendor=234b, idProduct=0000
[629120.511008] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[629120.511011] usb 1-1.5.1: Product: Gnuk Token
[629120.511014] usb 1-1.5.1: Manufacturer: Free Software Initiative of Japan
[629120.511017] usb 1-1.5.1: SerialNumber: FSIJ-1.2.14-67252015

The device can now be used with GnuPG as a smartcard device.

jas@latte:~/src/gnuk$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.14-67252015:0
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
jas@latte:~/src/gnuk$ 

Congratulations!

21 Mar 2019 8:39pm GMT

feedPlanet Grep

Xavier Mertens: OSSEC Conference 2019 Wrap-Up

I'm in Washington, waiting for my flight back to Belgium. I just attended the 2019 edition of the OSSEC Conference, held close to Washington in Herndon, VA. This was my first one and I was honoured to be invited to speak at the event. OSSEC is a very nice project that I've been using for a long time. I've also contributed to it and I give training on this topic. The conference has been organized for a few years now and attracts more people every year. They doubled the number of attendees for the 2019 edition.

The opening session was given by Scott Shinn, OSSEC Project Manager, who started with a recap. The project started in 2003 and was first released in 2005. It supports a lot of different environments and, basically, if you can compile C code on your device, it can run OSSEC! Scott presented some interesting facts. What is the state of the project? OSSEC is alive, with 500K downloads in 2018 and trending up. A survey is still ongoing but already demonstrates that many users are long-term users (31% have been using OSSEC for more than 5 years). While the top user profile remains infosec people, the second profile is IT operations and devops. There is now an OSSEC foundation (a 501(c)(3) non-profit organization) which has multiple goals: to promote OSSEC, to attract more developers, to strengthen the project, and probably to launch a bug bounty. There is an ongoing effort to make the tool more secure with an external audit of the code.

Then, Daniel Cid presented his keynote. Daniel is the OSSEC founder and reviewed the story of his baby. Like many of us, he was facing problems in his daily job and did not find the proper tool, so he started to develop OSSEC. There were already some tools here and there, like Owl, Syscheck and OSHIDS. Daniel integrated them and added a network layer and the agent/server model. He reviewed the very first versions, from 0.1 until 0.7. Funny story: some people asked him to stop flooding the mailing list where he announced all the versions, and suggested he contribute to the Tripwire project instead.

Then, Scott came back on stage to talk about the future of OSSEC. Sometimes, when I mention OSSEC, people's first reaction is to argue that OSSEC does not improve or does not have a clear roadmap. Really? Scott gave a nice overview of what's coming soon. Here is a quick list:

Most of these new features should be available in OSSEC 3.3.

The next presentation was about "Protecting Workloads in Google Kubernetes with OSSEC and Google Cloud Armor" by Ben Auch and Joe Miller from Gannett, the publisher of USA Today. This media company operates a huge network with 140M unique visitors monthly, 120 markets in the US and a worldwide presence. As a media company, they are often targeted (defacement, information change, fake news, etc). Ben & Joe explained how they successfully deployed OSSEC in their cloud infrastructure to detect malicious requests to GKE containers and automatically block attackers with a bunch of Active-Response scripts. The biggest challenge was to remain independent of the cloud provider and to access logs in a simple but effective way.

Mike Shinn, from Atomicorp, came to speak about "Real Time Threat Intelligence for Advanced Detection". Atomicorp, the organizer of the conference, provides OSSEC professional services and also works on extensions. Mike demonstrated what he called "the next-generation Active-Response". Today, this OSSEC feature accesses data from a CDB, but that's not real-time. The idea is to collect data from OSSEC agents installed in multiple locations and organizations (similar to what dshield.org is doing) and to apply some machine-learning magic. The idea is also to replace the CDB lookup mechanism with something more powerful and real-time: DNS lookups. A really interesting approach!

Ben Brooks, from Beryllium Infosec, presented "A Person Behind Every Event". This talk was not directly related to OSSEC but interesting anyway. Tools like OSSEC work with rules and technical information - IP addresses, files, URLs - but what about the people behind those alerts? Are we facing real attackers or rogue insiders? Who's the most critical? The presentation focused on the threat intelligence cycle:
Direction > Collection > Processing > Analysis > Dissemination

The next two talks had the same topic: automation. Ken Moini, from Fierce Software Automation, presented "Automating Security Across the Enterprise with Ansible and OSSEC". The idea behind the talk was to solve problems that most organizations are facing: people problems (skills gaps), point tools (proliferation of tools and vendor solutions) and the pace of innovation. Mike Waite, from RedHat, spoke about "Containerized software for a modern world, the good, the bad and the ugly". A few years ago, the ecosystem was based on many Linux flavors. Today, we have the same issue but with many flavors of Kubernetes. It's all about applications. If applications can be easily deployed, software vendors are also becoming Linux maintainers!

The next presentation was given by Andrew Hay, from LEO Cybersecurity: "Managing Multi-Cloud OSSEC Deployments". Andrew is a long-time OSSEC advocate and co-wrote the book "OSSEC HIDS Host Based Intrusion Detection Guide" with Daniel Cid. He presented tips & tricks for deploying OSSEC in cloud services and for generating configuration files with automation tools like Chef, Puppet or Ansible.

Mike Shinn came back with "Atomic Workload Protection". Yesterday, organizations' business was based on a secure network of servers. Tomorrow, we'll have to use a network of secure workloads. Workloads must be secured, and cloud providers can't do everything for us: cloud providers take care of the security of the cloud, but security in the cloud relies on their customers! Gartner said that, by 2023, 99% of cloud security failures will be the customer's fault. Mike explained how Atomicorp developed extra layers on top of OSSEC to secure workloads: hardening, vulnerability shielding, memory protection, application control, behavioral monitoring, micro-segmentation, deception and AV/anti-malware.

The next slot was mine; I presented "Threat Hunting with OSSEC".

Finally, the last presentation was by Dmitry Dain, who presented the NoiseSocket protocol that will be implemented in the next OSSEC release. The day ended with a quick OSSEC users panel and a nice social event.

The second day was mainly a workshop. Scott prepared some exercises to demonstrate how to use some existing features of OSSEC (FIM, Active-Response) but also the new feature called "Dynamic Decoder" (see above). I met a lot of new people who are all OSSEC users or contributors.

[The post OSSEC Conference 2019 Wrap-Up has been first published on /dev/random]

21 Mar 2019 7:06pm GMT

feedPlanet Python

Test and Code: 69: The Pragmatic Programmer - Andy Hunt

Andy Hunt and Dave Thomas wrote the seminal software development book, The Pragmatic Programmer. Together they founded The Pragmatic Programmers and are well known as founders of the agile movement and authors of the Agile Manifesto. They founded the Pragmatic Bookshelf publishing business in 2003.

The Pragmatic Bookshelf published its most important book, in my opinion, in 2017 with the first pytest book available from any publisher.

Topics:

- The Pragmatic Programmer, the book
- The Manifesto for Agile Software Development
- Agile methodologies and lightweight methods
- Some issues with "Agile" as it is now
- The GROWS Method
- Pragmatic Bookshelf, the publishing company
- How Pragmatic Bookshelf is different, and what it's like to be an author with them
- Reading and writing sci-fi novels, including Conglommora, Andy's novels
- Playing music

Special Guest: Andy Hunt.

Sponsored By:

- PyCharm Professional: Try PyCharm Pro for an extended 4 month trial before deciding which version you need. If you value your time, you owe it to yourself to try PyCharm. Promo Code: TESTNCODE2019

Support Test & Code - Software Testing, Development, Python


21 Mar 2019 5:00pm GMT

Caktus Consulting Group: Coding for Time Zones & Daylight Saving Time — Oh, the Horror

In this post, I review some reasons why it's really difficult to program correctly when using times, dates, time zones, and daylight saving time, and then I'll give some advice for working with them in Python and Django. Also, I'll go over why I hate daylight saving time (DST).

TIME ZONES

Let's start with some problems with time zones, because they're bad enough even before we consider DST, but they'll help us ease into it.

Time Zones Shuffle

Time zones are a human invention, and humans tend to change their minds, so time zones also change over time.

Many parts of the world struggle with time changes. For example, let's look at the Pacific/Apia time zone, which is the time zone of the independent country of Samoa. Through December 29, 2011, it was -11 hours from Coordinated Universal Time (UTC). From December 31, 2011, Pacific/Apia became +13 hours from UTC.

What happened on December 30, 2011? Well, it never happened in Samoa, because December 29, 23:59:59-11:00 is followed immediately by December 31, 0:00:00+13:00.

Date        Time      Zone     Date        Time      Zone
2011-12-29  23:59:59  UTC-11   2011-12-30  10:59:59  UTC
2011-12-31  00:00:00  UTC+13   2011-12-30  11:00:00  UTC

That's an extreme example, but time zones change more often than you might think, often due to changes in government or country boundaries.

The bottom line here is that even knowing the time and time zone, it's meaningless unless you also know the date.
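
A quick pytz sketch makes the skipped day concrete. Note that pytz applies Samoa's daylight saving rules for these dates, so the printed offsets can differ by an hour from the standard -11/+13 offsets quoted above:

from datetime import datetime
import pytz

apia = pytz.timezone("Pacific/Apia")
before = apia.localize(datetime(2011, 12, 29, 23, 59, 59))
after = apia.localize(datetime(2011, 12, 31, 0, 0, 0))
print(after - before)  # 0:00:01 -- one elapsed second spans the skipped day
print(before.strftime("%z"), after.strftime("%z"))  # offsets jump across the date line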

Always Convert to UTC?

As programmers, we're encouraged to avoid issues with time zones by "converting" times to UTC (Coordinated Universal Time) as early as possible, and convert to the local time zone only when necessary to display times to humans. But there's a problem with that.

If all you care about is the exact moment in the lifetime of the universe when an event happened (or is going to happen), then that advice is fine. But for humans, the time zone that they expressed a time in can be important, too.

For example, suppose I'm in North Carolina, in the eastern time zone, but I'm planning an event in Memphis, which is in the central time zone. I go to my calendar program and carefully enter the date and "3:00 p.m. CST". The calendar follows the usual convention and converts my entry to UTC by adding 6 hours, so the time is stored as 9:00 p.m. UTC, or 21:00 UTC. If the calendar uses Django, there's not even any extra code needed for the conversion, because Django does it automatically.

The next day I look at my calendar to continue working on my event. The event time has been converted to my local time zone, or eastern time, so the calendar shows the event happening at "4:00 p.m." (instead of the 3:00 p.m. that it should be). The conversion is not useful for me, because I want to plan around other events in the location where the event is happening, which is using CST, so my local time zone is irrelevant.
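
Here's a minimal sketch of that round trip with pytz (the specific date is made up):

from datetime import datetime
import pytz

central = pytz.timezone("US/Central")
eastern = pytz.timezone("US/Eastern")

# Event entered as 3:00 p.m. CST (a hypothetical January date):
event = central.localize(datetime(2019, 1, 15, 15, 0))
stored = event.astimezone(pytz.utc)        # 2019-01-15 21:00:00+00:00
# Redisplayed later in the viewer's local (eastern) zone:
print(stored.astimezone(eastern))          # 2019-01-15 16:00:00-05:00
# Nothing in "stored" remembers that the organizer meant Central time.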

The bottom line is that following the advice to always convert times to UTC results in lost information. We're sometimes better off storing times with their non-UTC time zones. That's why it's kind of annoying that Django always "converts" local times to UTC before saving to the database, or even before returning them from a form. That means the original timezone is lost unless you go to the trouble of saving it separately and then converting the time from the database back to that time zone after you get it from the database. I wrote about this before.

By the way, I've been putting "convert" in scare quotes because talking about converting times from one time zone to another carries an implicit assumption that such converting is simple and loses no information, but as we see, that's not really true.

DAYLIGHT SAVING TIME

Daylight saving time (DST) is even more of a human invention than time zones.

Time zones are a fairly obvious adaptation to the conflict between how our bodies prefer to be active during the hours when the sun is up, and how we communicate time with people in other parts of the world. Historical changes in time zones across the years are annoying, but since time zones are a human invention it's not surprising that we'd tweak them every now and then.

DST, on the other hand, amounts to changing entire time zones twice every year. What does US/eastern time zone mean? I don't know, unless you tell me the date. From January 1, 2018 to March 10, 2018, it meant UTC-5. From March 11, 2018 to November 3, 2018, it meant UTC-4. And from November 4, 2018 to December 31, 2018, it's UTC-5 again.

But it gets worse. From Wikipedia:

The Uniform Time Act of 1966 ruled that daylight saving time would run from the last Sunday of April until the last Sunday in October in the United States. The act was amended to make the first Sunday in April the beginning of daylight saving time as of 1987. The Energy Policy Act of 2005 extended daylight saving time in the United States beginning in 2007. So local times change at 2:00 a.m. EST to 3:00 a.m. EDT on the second Sunday in March and return at 2:00 a.m. EDT to 1:00 a.m. EST on the first Sunday in November.

So in a little over 50 years, the rules changed 3 times.

Even if you have complete and accurate information about the rules, daylight saving time complicates things in surprising ways. For example, you can't convert 2:30 a.m. March 11, 2018, in the US/eastern time zone to UTC, because that time never happened - our clocks had to jump directly from 1:59:59 a.m. to 3:00:00 a.m. See below:

Date        Time     Zone   Date        Time     Zone
2018-03-11  1:59:59  EST    2018-03-11  6:59:59  UTC
2018-03-11  3:00:00  EDT    2018-03-11  7:00:00  UTC
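
With pytz, asking for a strict interpretation (is_dst=None) turns this nonexistent time into an explicit error rather than a silent guess:

from datetime import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
try:
    eastern.localize(datetime(2018, 3, 11, 2, 30), is_dst=None)
except pytz.exceptions.NonExistentTimeError:
    print("2:30 a.m. on 2018-03-11 never happened in US/Eastern")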

You can't convert 1:30 a.m. November 4, 2018, in US/eastern time zone to UTC either, because that time happened twice. You would have to specify whether it was 1:30 a.m. November 4, 2018 EDT or 1:30 a.m. November 4, 2018 EST:

Date        Time     Zone   Date        Time     Zone
2018-11-04  1:00:00  EDT    2018-11-04  5:00:00  UTC
2018-11-04  1:30:00  EDT    2018-11-04  5:30:00  UTC
2018-11-04  1:59:59  EDT    2018-11-04  5:59:59  UTC
2018-11-04  1:00:00  EST    2018-11-04  6:00:00  UTC
2018-11-04  1:30:00  EST    2018-11-04  6:30:00  UTC
2018-11-04  1:59:59  EST    2018-11-04  6:59:59  UTC
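
The ambiguous fall-back hour is the mirror image; with pytz you must say which of the two occurrences you meant:

from datetime import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
ambiguous = datetime(2018, 11, 4, 1, 30)
try:
    eastern.localize(ambiguous, is_dst=None)
except pytz.exceptions.AmbiguousTimeError:
    print("1:30 a.m. on 2018-11-04 happened twice in US/Eastern")
# You have to disambiguate yourself:
print(eastern.localize(ambiguous, is_dst=True).strftime("%H:%M %Z%z"))   # 01:30 EDT-0400
print(eastern.localize(ambiguous, is_dst=False).strftime("%H:%M %Z%z"))  # 01:30 EST-0500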

Advice on How to Properly Manage datetimes

Here are some rules I try to follow.

When working in Python, never use naive datetimes. (Those are datetime objects without timezone information, which unfortunately are the default in Python, even in Python 3.)

Use the pytz library when constructing datetimes, and review the documentation frequently. Properly managing datetimes is not always intuitive, and using pytz doesn't prevent me from using it incorrectly and doing things that will provide the wrong results only for some inputs, making it really hard to spot bugs. I have to triple-check that I'm following the docs when I write the code and not rely on testing to find problems.
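
Here is one classic way to use pytz incorrectly that its documentation warns about: the constructor call looks plausible but silently picks the zone's oldest fixed offset (local mean time) instead of the right one.

from datetime import datetime
import pytz

eastern = pytz.timezone("US/Eastern")

# Wrong: attaches the zone's default (historical LMT) offset, ignoring DST:
bad = datetime(2018, 6, 15, 12, 0, tzinfo=eastern)
print(bad.strftime("%Z%z"))   # LMT-0456

# Right: localize() picks the offset that applies on that date:
good = eastern.localize(datetime(2018, 6, 15, 12, 0))
print(good.strftime("%Z%z"))  # EDT-0400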

Let me strengthen that even further. It is not possible to correctly construct datetimes with timezone information using only Python's own libraries when dealing with timezones that use DST. I must use pytz or something equivalent.

If I'm tempted to use datetime.replace, I need to stop, think hard, and find another way to do it. datetime.replace is almost always the wrong approach, because changing one part of a datetime without consideration of the other parts is almost guaranteed to not do what I expect for some datetimes.
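
A sketch of the kind of surprise replace() produces with an aware datetime; pytz's normalize() repairs the offset afterward, shifting the wall time to keep the instant the same:

from datetime import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
winter = eastern.localize(datetime(2018, 1, 15, 12, 0))  # EST, -0500

# replace() swaps the month but keeps the stale EST offset,
# even though June is EDT in this zone:
summer = winter.replace(month=6)
print(summer.strftime("%Z%z"))                     # EST-0500 -- wrong for June
print(eastern.normalize(summer).strftime("%Z%z"))  # EDT-0400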

When using Django, be sure USE_TZ = True. If Django emits warnings about naive datetimes being saved in the database, treat them as if they were fatal errors, track them down, and fix them. If I want to, I can even turn them into actual fatal errors; see this Django documentation.
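
The warnings filter that documentation suggests looks like this (a sketch; it would go in settings.py or similar):

import warnings

warnings.filterwarnings(
    "error",
    r"DateTimeField .* received a naive datetime",
    RuntimeWarning,
    r"django\.db\.models\.fields",
)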

When processing user input, consider whether a datetime's original timezone needs to be preserved, or if it's okay to just store the datetime as UTC. If the original timezone is important, see this post I wrote about how to get and store it.

Conclusion

Working with human times correctly is complicated, unintuitive, and needs a lot of careful attention to detail to get right. Further, some of the oft-given advice, like always working in UTC, can cause problems of its own.

21 Mar 2019 4:45pm GMT

feedPlanet Grep

Dries Buytaert: JSON:API lands in Drupal core

[Image: JSON:API being dropped into Drupal by a crane]

Breaking news: we just committed the JSON:API module to the development branch of Drupal 8.

In other words, JSON:API support is coming to all Drupal 8 sites in just a few short months! 🎉

This marks another important milestone in Drupal's evolution to be an API-first platform optimized for building both coupled and decoupled applications.

With JSON:API, developers or content creators can create their content models in Drupal's UI without having to write a single line of code, and automatically get not only a great authoring experience, but also a powerful, standards-compliant web service API to pull that content into JavaScript applications, digital kiosks, chatbots, voice assistants and more.

When you enable the JSON:API module, all Drupal entities such as blog posts, users, tags, comments and more become accessible via the JSON:API web service API. JSON:API provides a standardized API for reading and modifying resources (entities), interacting with relationships between resources (entity references), fetching of only the selected fields (e.g. only the "title" and "author" fields), including related resources to avoid additional requests (e.g. details about the content's author) and filtering, sorting and paginating collections of resources.
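
As a sketch of what this looks like to a client (the site URL is hypothetical; the query parameters follow the JSON:API specification as implemented by the Drupal module):

import requests

BASE = "https://example.com/jsonapi"  # hypothetical Drupal site

# Fetch article nodes: only the title and author fields, with the author
# included to avoid a second request, filtered, sorted and paginated:
resp = requests.get(
    f"{BASE}/node/article",
    params={
        "fields[node--article]": "title,uid",
        "include": "uid",
        "filter[status]": 1,
        "sort": "-created",
        "page[limit]": 10,
    },
    headers={"Accept": "application/vnd.api+json"},
)
for item in resp.json()["data"]:
    print(item["attributes"]["title"])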

In addition to being incredibly powerful, JSON:API is easy to learn and use, and it works with all the tooling we already have available to test, debug and scale Drupal sites.

Drupal's JSON:API implementation was years in the making

Development of the JSON:API module started in May 2016 and reached a stable 1.0 release in May 2017. Most of the work was driven by a single developer partially in his free time: Mateu Aguiló Bosch (e0ipso).

After soliciting input and consulting others, I felt JSON:API belonged in Drupal core. I first floated this idea in July 2016, became more convinced in December 2016 and recommended that we standardize on it in October 2017.

This is why at the end of 2017, I asked Wim Leers and Gabe Sullice - as part of their roles at Acquia - to start devoting the majority of their time to getting JSON:API to a high level of stability.

Wim and Gabe quickly became key contributors alongside Mateu. They wrote hundreds of tests and added missing features to make sure we guarantee strict compliance with the JSON:API specification.

A year later, their work culminated in a JSON:API 2.0 stable release on January 7th, 2019. The 2.0 release marked the start of the module's move to Drupal core. After rigorous reviews and more improvements, the module was finally committed to core earlier today.

From beginning to end, it took 28 months, 450 commits, 32 releases and more than 5,500 test runs.

The best JSON:API implementation in existence

The JSON:API module for Drupal is almost certainly the most feature-complete and easiest-to-use JSON:API implementation in existence.

The Drupal JSON:API implementation supports every feature of the JSON:API 1.0 specification out-of-the-box. Every Drupal entity (a resource object in JSON:API terminology) is automatically made available through JSON:API. Existing access controls for both reading and writing are respected. Both translations and revisions of entities are also made available. Furthermore, querying entities (filtering resource collections in JSON:API terminology) is possible without any configuration (e.g. setting up a "Drupal View"), which means front-end developers can get started on their work right away.

What is particularly rewarding is that all of this was made possible thanks to Drupal's data model and introspection capabilities. Drupal's decade-old Entity API, Field API, Access APIs and more recent Configuration and Typed Data APIs exist as an incredibly robust foundation for making Drupal's data available via web service APIs. This is not to be understated, as it makes the JSON:API implementation robust, deeply integrated and elegant.

I want to extend a special thank you to the many contributors that contributed to the JSON:API module and that helped make it possible for JSON:API to be added to Drupal 8.7.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for co-authoring this blog post and to Mateu Aguiló Bosch (e0ipso) (Lullabot), Preston So (Acquia), Alex Bronstein (Acquia) for their feedback during the writing process.

21 Mar 2019 1:25pm GMT

Wim Leers: JSON:API shipping with Drupal 8.7!

The JSON:API module was added to Drupal 8.7 as a stable module!

See Dries' overview of why this is an important milestone for Drupal, a look behind the scenes and a look toward the future. Read that first!

Upgrading?

As Mateu said, this is the first time a new module is added to Drupal core as "stable" (non-experimental) from day one. This was the plan since July 2018 - I'm glad we delivered on that promise.

This means users of the JSON:API 8.x-2.x contrib module currently on Drupal 8.5 or 8.6 can update to Drupal 8.7 on its release day and simply delete their current contributed module, and have no disruption in their current use of JSON:API, nor in security coverage! 1

What's happened lately?

The last JSON:API update was exactly two months ago, because … ever since then Gabe, Mateu and I have been working very hard to get JSON:API through the core review process. This resulted in a few notable improvements:

  1. a read-only mode that is turned on by default for new installs - this strikes a nice balance between DX (still having data available via APIs by default/zero config: reading is probably the 80% use case, at least today) and minimizing risk (not allowing writes by default) 2
  2. auto-revisioning when PATCHing for eligible entity types
  3. formally documented & tested revisions and translations support 3
  4. formally documented security considerations

Get these improvements today by updating to version 2.4 of the JSON:API module - it's identical to what was added to Drupal 8.7!

Contributors

An incredible total of 103 people contributed in JSON:API's issue queue to help make this happen, and 50 of those even have commits to their name:

Wim Leers, ndobromirov, e0ipso, nuez, gabesullice, xjm, effulgentsia, seanB, jhodgdon, webchick, Dries, andrewmacpherson, jibran, larowlan, Gábor Hojtsy, benjifisher, phenaproxima, ckrina, dww, amateescu, voleger, plach, justageek, catch, samuel.mortenson, berdir, zhangyb, killes@www.drop.org, malik.kotob, pfrilling, Grimreaper, andriansyahnc, blainelang, btully, ebeyrent, garphy, Niklan, joelstein, joshua.boltz, govind.maloo, tstoeckler, hchonov, dawehner, kristiaanvandeneynde, dagmar, yobottehg, olexyy.mails@gmail.com, keesee, caseylau, peterdijk, mortona2k, jludwig, pixelwhip, abhisekmazumdar, izus, Mile23, mglaman, steven.wichers, omkar06, haihoi2, axle_foley00, hampercm, clemens.tolboom, gargsuchi, justafish, sonnykt, alexpott, jlscott, DavidSpiessens, BR0kEN, danielnv18, drpal, martin107, balsama, nileshlohar, gerzenstl, mgalalm, tedbow, das-peter, pwolanin, skyredwang, Dave Reid, mstef, bwinett, grndlvl, Spleshka, salmonek, tom_ek, huyby, mistermoper, jazzdrive3, harrrrrrr, Ivan Berezhnov, idebr, mwebaze, dpolant, dravenk, alan_blake, jonathan1055, GeduR, kostajh, pcambra, meba, dsdeiz, jian he, matthew.perry.

Thanks to all of you!

Future JSON:API blogging

I blogged about once a month since October 2018 about JSON:API, to get more people to switch to version 2.x of the JSON:API module, to ensure it was maximally mature and bug free prior to going into Drupal core. New capabilities were also being added at a pretty high pace because we'd been preparing the code base for that months prior. We went from ~1700 installs in January to ~2700 today!

Now that it is in Drupal core, there will be less need for frequent updates, and I think the API-First Drupal: what's new in 8.next? blog posts that I have been doing probably make more sense. I will do one of those when Drupal 8.7.0 is released in May, because not only will it ship with JSON:API, there are also other improvements!

Special thanks to Mateu Aguiló Bosch (e0ipso) for their feedback!


  1. We'll of course continue to provide security releases for the contributed module. Once Drupal 8.7 is released, the Drupal Security Team stops supporting Drupal 8.5. At that time, the JSON:API contributed module will only need to provide security support for Drupal 8.6. Once Drupal 8.8 is released at the end of 2019, the JSON:API contributed module will no longer be supported: since JSON:API will then be part of both Drupal 8.7 and 8.8, there is no reason for the contributed module to continue to be supported. ↩︎

  2. Existing sites will continue to have writes enabled by default, but can choose to enable the read-only mode too. ↩︎

  3. Limitations in the underlying Drupal core APIs prevent JSON:API from 100% of desired capabilities, but with JSON:API now being in core, it'll be much easier to make the necessary changes happen! ↩︎

21 Mar 2019 1:20pm GMT

20 Mar 2019

feedPlanet Lisp

Lispers.de: Berlin Lispers Meetup, Monday, 25th March 2019

We meet again on Monday 8pm, 25th March. Our host this time is James Anderson (www.dydra.com).

Berlin Lispers is about all flavors of Lisp including Common Lisp, Scheme, Dylan, Clojure.

We will have a talk this time. Ingo Mohr will tell us "About the Unknown East of the Ancient LISP World. History and Thoughts. Part I: LISP on Punchcards".

We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.

20 Mar 2019 10:30am GMT

07 Mar 2019

feedPlanet Lisp

Quicklisp news: March 2019 Quicklisp dist update now available

New projects:

Updated projects: agnostic-lizard, april, big-string, binfix, cepl, chancery, chirp, cl+ssl, cl-abstract-classes, cl-all, cl-async, cl-collider, cl-conllu, cl-cron, cl-digraph, cl-egl, cl-gap-buffer, cl-generator, cl-generic-arithmetic, cl-grace, cl-hamcrest, cl-las, cl-ledger, cl-locatives, cl-markless, cl-messagepack, cl-ntriples, cl-patterns, cl-prevalence, cl-proj, cl-project, cl-qrencode, cl-random-forest, cl-stopwatch, cl-string-complete, cl-string-match, cl-tcod, cl-wayland, clad, clem, clod, closer-mop, clx-xembed, coleslaw, common-lisp-actors, croatoan, dartsclhashtree, data-lens, defrec, doplus, doubly-linked-list, dynamic-collect, eclector, escalator, external-program, fiasco, flac-parser, game-math, gamebox-dgen, gamebox-math, gendl, generic-cl, genie, golden-utils, helambdap, interface, ironclad, jp-numeral, json-responses, l-math, letrec, lisp-chat, listopia, literate-lisp, maiden, map-set, mcclim, mito, nodgui, overlord, parachute, parameterized-function, pathname-utils, periods, petalisp, pjlink, plump, policy-cond, portable-threads, postmodern, protest, qt-libs, qtools, qtools-ui, recur, regular-type-expression, rove, serapeum, shadow, simplified-types, sly, spinneret, staple, stumpwm, sucle, synonyms, tagger, template, trivia, trivial-battery, trivial-benchmark, trivial-signal, trivial-utilities, ubiquitous, umbra, usocket, varjo, vernacular, with-c-syntax.

Removed projects: mgl, mgl-mat.

To get this update, use: (ql:update-dist "quicklisp")

Enjoy!

07 Mar 2019 2:52pm GMT

27 Feb 2019

feedPlanet Lisp

Lispers.de: Lisp-Meetup in Hamburg on Monday, 4th March 2019

We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 4th March 2019.

This is an informal gathering of Lispers. Come as you are, bring lispy topics.

27 Feb 2019 11:26am GMT

04 Nov 2018

feedPlanet Sun

5 Budget-Friendly Telescopes You Can Choose For Viewing Planets

Socrates couldn't have been more right when he said: "I know one thing, that I know nothing." Even with all of the advancements we, as a species, have made in this world, it's still nothing compared to countless of wonders waiting to be discovered in the vast universe. If you've recently developed an interest in […]

04 Nov 2018 1:27pm GMT

28 Jun 2015

feedPlanet Sun

PicoChess 0.43 released

This is just a short hint for all fans of chess programs. PicoChess 0.43 has been released. Announced by J. Precour from ascent ag. If you are interested in chess and picochess, please visit PicoChess by LocutusOfPenguin. Home of a dedicated chess computer based on tiny ARM computers in conjunction with the DGT e-board. Go […]

28 Jun 2015 11:02pm GMT

20 May 2012

feedPlanet Sun

Annular Solar Eclipse on Sunday, May 20th 2012

On Sunday, May 20th 2012, people in a narrow strip from Japan to the western United States will be able to see an annular solar eclipse, the first in 18 years. The moon will cover as much as 94% of the sun. An Annular Solar Eclipse is different from a Total Solar Eclipse, when the […]

20 May 2012 9:51pm GMT

10 Nov 2011

feedPlanetJava

OSDir.com - Java: Oracle Introduces New Java Specification Requests to Evolve Java Community Process

From the Yet Another dept.:

To further its commitment to the Java Community Process (JCP), Oracle has submitted the first of two Java Specification Requests (JSRs) to update and revitalize the JCP.

10 Nov 2011 6:01am GMT

OSDir.com - Java: No copied Java code or weapons of mass destruction found in Android

From the Fact Checking dept.:

ZDNET: Sometimes the sheer wrongness of what is posted on the web leaves us speechless. Especially when it's picked up and repeated as gospel by otherwise reputable sites like Engadget. "Google copied Oracle's Java code, pasted in a new license, and shipped it," they reported this morning.



Sorry, but that just isn't true.

10 Nov 2011 6:01am GMT

OSDir.com - Java: Java SE 7 Released

From the Grande dept.:

Oracle today announced the availability of Java Platform, Standard Edition 7 (Java SE 7), the first release of the Java platform under Oracle stewardship.

10 Nov 2011 6:01am GMT

08 Nov 2011

feedfosdem - Google Blog Search

papupapu39 (papupapu39)'s status on Tuesday, 08-Nov-11 00:28 ...

papupapu39 · http://identi.ca/url/56409795 #fosdem #freeknowledge #usamabinladen · about a day ago from web.

08 Nov 2011 12:28am GMT

05 Nov 2011

feedfosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

feedfosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

28 Oct 2011

feedPlanet Ruby

O'Reilly Ruby: MacRuby: The Definitive Guide

Ruby and Cocoa on OS X, the iPhone, and the Device That Shall Not Be Named

28 Oct 2011 8:00pm GMT

14 Oct 2011

feedPlanet Ruby

Charles Oliver Nutter: Why Clojure Doesn't Need Invokedynamic (Unless You Want It to be More Awesome)

This was originally posted as a comment on @fogus's blog post "Why Clojure doesn't need invokedynamic, but it might be nice". I figured it's worth a top-level post here.

Ok, there's some good points here and a few misguided/misinformed positions. I'll try to cover everything.

First, I need to point out a key detail of invokedynamic that may have escaped notice: any case where you must bounce through a generic piece of code to do dispatch -- regardless of how fast that bounce may be -- prevents a whole slew of optimizations from happening. This might affect Java dispatch, if there's any argument-twiddling logic shared between call sites. It would definitely affect multimethods, which are using a hand-implemented PIC. Any case where there's intervening code between the call site and the target would benefit from invokedynamic, since invokedynamic could be used to plumb that logic and let it inline straight through. This is, indeed, the primary benefit of using invokedynamic: arbitrarily complex dispatch logic folds away allowing the dispatch to optimize as if it were direct.

Your point about inference in Java dispatch is a fair one...if Clojure is able to infer all cases, then there's no need to use invokedynamic at all. But unless Clojure is able to infer all cases, then you've got this little performance time bomb just waiting to happen. Tweak some code path and obscure the inference, and kablam, you're back on a slow reflective impl. Invokedynamic would provide a measure of consistency; the only unforeseen perf impact would be when the dispatch turns out to *actually* be polymorphic, in which case even a direct call wouldn't do much better.

For multimethods, the benefit should be clear: the MM selection logic would be mostly implemented using method handles and "leaf" logic, allowing hotspot to inline it everywhere it is used. That means for small-morphic MM call sites, all targets could potentially inline too. That's impossible without invokedynamic unless you generate every MM path immediately around the eventual call.

Now, on to defs and Var lookup. Depending on the cost of Var lookup, using a SwitchPoint-based invalidation plus invokedynamic could be a big win. In Java 7u2, SwitchPoint-based invalidation is essentially free until invalidated, and as you point out that's a rare case. There would essentially be *no* cost in indirecting through a var until that var changes...and then it would settle back into no cost until it changes again. Frequently-changing vars could gracefully degrade to a PIC.

It's also dangerous to understate the impact code size has on JVM optimization. The usual recommendation on the JVM is to move code into many small methods, possibly using call-through logic as in multimethods to reuse the same logic in many places. As I've mentioned, that defeats many optimizations, so the next approach is often to hand-inline logic everywhere it's used, to let the JVM have a more optimizable view of the system. But now we're stepping on our own feet...by adding more bytecode, we're almost certainly impacting the JVM's optimization and inlining budgets.

OpenJDK (and probably the other VMs too) has various limits on how far it will go to optimize code. A large number of these limits are based on the bytecoded size of the target methods. Methods that get too big won't inline, and sometimes won't compile. Methods that inline a lot of code might not get inlined into other methods. Methods that inline one path and eat up too much budget might push out more important calls later on. The only way around this is to reduce bytecode size, which is where invokedynamic comes in.

As of OpenJDK 7u2, MethodHandle logic is not included when calculating inlining budgets. In other words, if you push all the Java dispatch logic or multimethod dispatch logic or var lookup into mostly MethodHandles, you're getting that logic *for free*. That has had a tremendous impact on JRuby performance; I had previous versions of our compiler that did indeed infer static target methods from the interpreter, but they were often *slower* than call site caching solely because the code was considerably larger. With invokedynamic, a call is a call is a call, and the intervening plumbing is not counted against you.

Now, what about negative impacts to Clojure itself...

#0 is a red herring. JRuby supports Java 5, 6, and 7 with only a few hundred lines of changes in the compiler. Basically, the compiler has abstract interfaces for doing things like constant lookup, literal loading, and dispatch that we simply reimplement to use invokedynamic (extending the old non-indy logic for non-indified paths). In order to compile our uses of invokedynamic, we use Rémi Forax's JSR-292 backport, which includes a "mock" jar with all the invokedynamic APIs stubbed out. In our release, we just leave that library out, reflectively load the invokedynamic-based compiler impls, and we're off to the races.

#1 would be fair if the Oracle Java 7u2 early-access drops did not already include the optimizations that gave JRuby those awesome numbers. The biggest of those optimizations was making SwitchPoint free, but also important are the inlining discounting and MutableCallSite improvements. The perf you see for JRuby there can apply to any indirected behavior in Clojure, with the same perf benefits as of 7u2.

For #2, to address the apparent vagueness in my blog post...the big perf gain was largely from using SwitchPoint to invalidate constants rather than pinging a global serial number. Again, indirection folds away if you can shove it into MethodHandles. And it's pretty easy to do it.

#3 is just plain FUD. Oracle has committed to making invokedynamic work well for Java too. The current thinking is that "lambda", the support for closures in Java 7, will use invokedynamic under the covers to implement "function-like" constructs. Oracle has also committed to Nashorn, a fully invokedynamic-based JavaScript implementation, which has many of the same challenges as languages like Ruby or Python. I talked with Adam Messinger at Oracle, who explained to me that Oracle chose JavaScript in part because it's so far away from Java...as I put it (and he agreed) it's going to "keep Oracle honest" about optimizing for non-Java languages. Invokedynamic is driving the future of the JVM, and Oracle knows it all too well.

As for #4...well, all good things take a little effort :) I think the effort required is far lower than you suspect, though.

14 Oct 2011 2:40pm GMT

07 Oct 2011

feedPlanet Ruby

Ruby on Rails: Rails 3.1.1 has been released!

Hi everyone,

Rails 3.1.1 has been released. This release requires at least sass-rails 3.1.4

CHANGES

ActionMailer

ActionPack

ActiveModel

ActiveRecord

ActiveResource

ActiveSupport

Railties

SHA-1

You can find an exhaustive list of changes on GitHub, along with the closed issues marked for v3.1.1.

Thanks to everyone!

07 Oct 2011 5:26pm GMT

26 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived.
For information about the latest FOSDEM edition please check this url: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM, we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT