25 Apr 2017


Keith Packard: TeleMini3

TeleMini V3.0 Dual-deploy altimeter with telemetry now available

TeleMini v3.0 is an update to our original TeleMini v1.0 flight computer. It is a miniature (1/2 inch by 1.7 inch) dual-deploy flight computer with data logging and radio telemetry. Small enough to fit comfortably in an 18mm tube, this powerful package does everything you need on a single board.

I don't have anything in these images to show just how tiny this board is, but the spacing between the screw terminals is 2.54mm (0.1in), and the whole board is only 13mm wide (1/2in).

This was a fun board to design. As you might guess from the version number, we made a couple of prototypes of a version 2 using the same CC1111 SoC/radio part as version 1, but in the EasyMini form factor (0.8 by 1.5 inches). Feedback from existing users indicated that bigger wasn't better in this case, so we shelved that design.

With the availability of the STM32F042 ARM Cortex-M0 part in a 4mm square package, I was able to pack that, the higher-power CC1200 radio part, a 512kB memory part and a beeper into the same space as the original TeleMini version 1 board. There is USB on the board, but it's only brought out to some tiny holes, along with the Cortex SWD debugging connection. I may make some kind of jig to gain access to that for configuration, data download and reprogramming.

For those interested in an even smaller option, you could remove the screw terminals and battery connector and directly wire to the board, and replace the beeper with a shorter version. You could even cut the rear mounting holes off to make the board shorter; there are no components in that part of the board.

25 Apr 2017 4:01pm GMT

Daniel Pocock: FSFE Fellowship Representative, OSCAL'17 and other upcoming events

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of and involved in the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join; here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic, and debate about it is very welcome; I would be particularly interested to hear any concerns or ideas for improvement that people may have. One of the best places to share these ideas would be the FSFE's discussion list.

In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

25 Apr 2017 12:57pm GMT

Tim Retout: Packet.net arm64 servers

Packet.net offer an ARMv8 server with 96 cores for $0.50/hour. I signed up and tried building LibreOffice to see what would happen. Debian isn't officially supported there yet, but they offer Ubuntu, which suffices for testing the hardware.

Screenshot of htop showing one core in use and 95 idle.

Final build time: around 12 hours, compared to 2hr 55m on the official arm64 buildd.

Most of the LibreOffice build appeared to consist of "touch /some/file" repeated endlessly; I suspect the I/O performance of this server might be low (although I have no further evidence to offer for this). I think the next thing to try is building on a tmpfs, because the server has 128GB RAM available, and it's a shame not to use it.
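A tmpfs build could look roughly like this (a minimal sketch; the mount point, size and source checkout are illustrative):

# create a RAM-backed filesystem and build inside it
sudo mkdir -p /mnt/build
sudo mount -t tmpfs -o size=100G tmpfs /mnt/build
cd /mnt/build
git clone --depth=1 https://git.libreoffice.org/core libreoffice
cd libreoffice
./autogen.sh && make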

25 Apr 2017 12:38pm GMT

Wouter Verhelst: Removing git-lfs

Git is cool, for reasons I won't go into here.

It doesn't deal very well with very large files, but that's fine; tools like git-annex or git-lfs make it possible to handle very large files.

But what if you've added a file to git-lfs which didn't need to be there? Let's say you installed git-lfs and told it to track all *.zip files, but then it turned out that some of those files were really small, and that the extra overhead of tracking them in git-lfs is causing a lot of grief with your users. What do you do now?

With git-annex, the solution is simple: you just run git annex unannex <filename>, and you're done. You may also need to tell the assistant to no longer automatically add that file to the annex, but beyond that, all is well.

With git-lfs, this works slightly differently. It's not much more complicated, but it's not documented in the man page. The naive approach would be to just run git lfs untrack; but when I tried that, it didn't work. Instead, I found that the following does work:
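(The original snippet was lost from this copy of the post; below is a plausible reconstruction of the commonly documented sequence for moving a file out of git-lfs, not necessarily the author's exact commands. File names are illustrative.)

# stop tracking the pattern in .gitattributes, then re-add the file
# so it is stored as a regular git blob again
git lfs untrack '*.zip'
git rm --cached small-file.zip
git add small-file.zip
git commit -m 'store small-file.zip directly in git, not in git-lfs'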

25 Apr 2017 9:56am GMT

Reproducible builds folks: Reproducible Builds: week 104 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 16 and Saturday April 22 2017:

Upcoming events

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Chris West:

Reviews of unreproducible packages

37 package reviews have been added, 64 have been updated and 16 have been removed this week, adding to our knowledge about identified issues.

One issue type has been updated:

Two issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

diffoscope development

Misc.

This week's edition was written by Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

25 Apr 2017 7:38am GMT

24 Apr 2017


Mike Gabriel: Making Debian experimental's X2Go Server Packages available on Ubuntu, Mint and alike

Often I get asked: How can I test the latest nx-libs packages [1] with a stable version of X2Go Server [2] on non-Debian, but Debian-like systems (e.g. Ubuntu, Mint, etc.)?

This is quite easy, if you are not scared of building binary Debian packages from Debian source packages. Until X2Go Server (and NXv3) are made available in Debian unstable, brave testers should follow the installation recipe below.

Step 1: Add Debian experimental as Source Package Source

Add Debian experimental as source package provider (and immediately install the Debian Archive Keyring package):

$ echo "deb-src http://httpredir.debian.org/debian experimental main" | sudo tee /etc/apt/sources.list.d/debian-experimental.list
$ sudo apt-get update
$ sudo apt-get install debian-archive-keyring
$ sudo apt-get update

Step 2: Obtain Build Tools and Build Dependencies

When building software, you need some extra packages. Those packages will not be needed at runtime by the built software, so you may want to take notes on what extra packages get installed in the step below. If you plan on rebuilding X2Go Server and NXv3 several times, simply leave the build dependencies installed:

$ sudo apt-get build-dep nx-libs
$ sudo apt-get build-dep x2goserver

Step 3: Build NXv3 and X2Go Server from Source

Building NXv3 (aka nx-libs) takes a while, so it may be time to get some coffee now... The build process should not be run as the superuser root; stay with your normal user account.

$ mkdir Development/ && cd Development/
$ apt-get source -b nx-libs

[... enjoy your coffee, there'll be much output on your screen... ]

$ apt-get source -b x2goserver

In your working directory, you should now find various new files ending with .deb.

Step 4: Install the built packages

We will now install these .deb files. It does not hurt to simply install all of them:

sudo dpkg -i *.deb

The above command might result in some error messages. Ignore them; you can easily fix them by installing the missing runtime dependencies:

sudo apt-get install -f

Play it again, Sam

If you want to re-do the above with some new nx-libs or x2goserver source package version, simply create an empty folder and repeat the steps above. The dpkg command will install the .deb files over the currently installed package versions and update your system with your latest build.
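A complete rebuild round, following the steps above, might look like this (a sketch; the folder name is illustrative):

$ mkdir rebuild/ && cd rebuild/
$ apt-get source -b nx-libs
$ apt-get source -b x2goserver
$ sudo dpkg -i *.deb
$ sudo apt-get install -f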

The disadvantage of this build-from-source approach (a temporary recommendation until X2Go Server & co. have landed in Debian unstable) is that you have to check for updates manually from time to time.

All X2Go components are packaged by the still quite fresh Debian Remote Packaging Maintainers team; you may want to visit the DDPO page of our team: https://qa.debian.org/developer.php?login=pkg-remote-team%40lists.alioth...

Recommended versions

For X2Go Server, the 4.0.1.x release series is quite stable. The version shipped with Debian has been patched to work with the upcoming nx-libs 3.6.x series, but also tolerates the older 3.5.0.x series as shipped with X2Go's upstream packages.

For NXv3 (aka nx-libs) we recommend using (thus, waiting for) the 3.5.99.6 release. The package has already been uploaded to Debian experimental, but is waiting in Debian NEW for a minor ftp-master ACK (we added one binary package with the recent upload).

Updates

References

24 Apr 2017 2:48pm GMT

Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.6)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) continues under the project name "nx-libs".

Release Announcement

On Friday, Apr 21st 2017, version 3.5.99.6 of nx-libs was released [1].

As some of you might have noticed, the release announcements for 3.5.99.4 and 3.5.99.5 have never been posted / written, so this announcement lists changes introduced since 3.5.99.3.

Credits

There are always many people to thank, so I won't mention all of them here. The person I do need to mention is Mihai Moldovan. He is virtually our QA manager, although not officially entitled as such. The feedback he gives on code reviews is sooo awesome!!! May you stay available to our project for a long time. Thanks a lot, Mihai!!!

Changes between 3.5.99.4 and 3.5.99.3

Changes between 3.5.99.5 and 3.5.99.4

Changes between 3.5.99.6 and 3.5.99.5

Change Log

Lists of changes (since 3.5.99.3) can be obtained from here (3.5.99.3 -> .4), here (3.5.99.4 -> .5) and here (3.5.99.5 -> .6)

Known Issues

A list of known issues can be obtained from the nx-libs issue tracker [issues].

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -
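To double-check that the imported key matches the fingerprint given above, you can ask apt-key to print it (a small sketch, using the key ID from this post):

 apt-key fingerprint 98DE3101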

The nx-libs software project brings you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). The nxagent Xserver can be used for remote sessions (via the nxcomp compression library) or as a nested Xserver.

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu releases to our build server. At the moment, you can obtain nx-libs builds for Ubuntu 16.10 (yakkety) and 17.04 (zesty) as nightly builds.

References

24 Apr 2017 2:12pm GMT

Norbert Preining: Leaving for BachoTeX 2017

Tomorrow we are leaving for TUG 2017 @ BachoTeX, one of the most unusual and great series of conferences (BachoTeX) merged with the most important series of TeX conferences (TUG). I am looking forward to this trip and to seeing all the good friends there.

And having the chance to visit my family in Vienna at the same time makes this trip worth it, however painful the long flight with our daughter may be.

See you in Vienna and Bachotek!

PS: No pun intended with the photo-logo combination, just a shot of the great surroundings in Bachotek 😉

24 Apr 2017 2:20am GMT

23 Apr 2017


Mark Brown: Bronica Motor Drive SQ-i

I recently got a Bronica SQ-Ai medium format film camera, which came with the Motor Drive SQ-i. Since I couldn't find any documentation at all about it on the internet and had to work it out for myself, I figured I'd write up what I learned here. Hopefully this will help the next person trying to figure one out; or at least, by virtue of being wrong on the internet, I'll be able to get someone who knows what they're doing to tell me how the thing really works.

Bottom plate

The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base plate. There's also a metal plate, with the bottom of the hand grip attached to it, held on to the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment, which (very conveniently) takes 6 AA batteries. The drive also provides power to the camera body when attached.

Bottom plate with battery compartment visible

On the back of the base of the camera there's a button with a red LED next to it, which illuminates slightly when the button is pressed (it's visible in low light only). I'm not 100% sure what this is for; I'd have guessed a battery check if the light were easier to see.

Top of drive

On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well), while the smaller button to the rear of the camera controls the motor: depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera: single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

23 Apr 2017 1:17pm GMT

Andreas Metzler: balance sheet snowboarding season 2016/17

Another year of minimal snow. Again there was early snowfall in the mountains at the start of November, but the snow was soon gone again. There was no snow up to 2000 meters of altitude until about January 3. Christmas week was spent hiking up and taking the lift down.

I had my first day on board on January 6 on artificial snow, and the first one on natural snow on January 19. Down where I live (800m), snow was scarce the whole winter, never topping 1m. The measuring station at Diedamskopf (1800m above sea level) topped out at slightly above 200cm, on April 19. The last boarding day was yesterday (April 22) in Warth, with hero conditions.

I had a pre-opening on the glacier in Pitztal at the start of November with Pure Boarding. However, due to the long waiting period between pre-opening and the start of the season, it did not pay off: by the time I rode regularly I had forgotten almost everything I learned at Carving-School.

Nevertheless it was a strong season, due to long periods of stable, sunny weather, with 30 days on piste (counting the day I went up and barely managed a single blind run in super-dense fog).

Anyway, here is the balance-sheet:

                           2005/06  2006/07  2007/08  2008/09  2009/10  2010/11  2011/12  2012/13  2013/14  2014/15  2015/16  2016/17
number of (partial) days        25       17       29       37       30       30       25       23       30       24       17       30
Damüls                          10       10        5       10       16       23       10        4       29        9        4        4
Diedamskopf                     15        4       24       23       13        4       14       19        1       13       12       23
Warth/Schröcken                  0        3        0        4        1        3        1        0        0        2        1        3
total meters of altitude    124634    74096   219936   226774   202089   203918   228588   203562   274706   224909   138037   269819
highscore                   10247m    8321m   12108m   11272m   11888m   10976m   13076m   13885m   12848m   13278m   11015m   12245m
# of runs                      309      189      503      551      462      449      516      468      597      530      354      634

23 Apr 2017 12:03pm GMT

22 Apr 2017


Enrico Zini: Splitting a git-annex repository

I have a git annex repo for all my media that has grown to 57866 files and git operations are getting slow, especially on external spinning hard drives, so I decided to split it into separate repositories.

This is how I did it, with some help from #git-annex. Suppose the old big repo is at ~/oldrepo:

# Create a new repo for photos only
mkdir ~/photos
cd ~/photos
git init
git annex init laptop

# Hardlink all the annexed data from the old repo
cp -rl ~/oldrepo/.git/annex/objects .git/annex/

# Regenerate the git annex metadata
git annex fsck --fast

# Also split the repo on the usb key
cd /media/usbkey
git clone ~/photos
cd photos
git annex init usbkey
cp -rl ../oldrepo/.git/annex/objects .git/annex/
git annex fsck --fast

# Connect the annexes as remotes of each other
git remote add laptop ~/photos
cd ~/photos
git remote add usbkey /media/usbkey/photos

At this point, I went through all repos doing standard cleanup:

# Remove unneeded hard links
git annex unused
git annex dropunused --force 1-12345

# Sync
git annex sync

To make sure nothing is missing, I used git annex find --not --in=here to see if, for example, the usbkey that should have everything could be missing something.
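For example, run inside the copy on the usb key:

# list annexed files whose content is not present in this repository
cd /media/usbkey/photos
git annex find --not --in=here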

Update: Antoine Beaupré pointed me to this tip about Repositories with large number of files which I will try next time one of my repositories grows enough to hit a performance issue.

22 Apr 2017 6:48pm GMT

Manuel A. Fernandez Montecelo: Debian GNU/Linux port for RISC-V 64-bit (riscv64)

This is a post describing my involvement with the Debian GNU/Linux port for RISC-V (unofficial and not endorsed by Debian at the moment) and announcing the availability of the repository (still very much WIP) with packages built for this architecture.

If not interested in the story but you want to check the repository, just jump to the bottom.

Roots

A while ago, mostly during 2014, I was involved in the Debian port for OpenRISC (or1k) ─ about which I posted (by coincidence) exactly 2 years ago.

The two of us working on the port stopped in August or September of that year, after learning that the copyright of the code to add support for this architecture in GCC would not be assigned to the FSF, so it would never be added to GCC upstream ─ unless the original authors changed their mind (which they didn't) or there was a clean-room reimplementation (which hasn't happened so far).

But a few other key things contributed to the decision to stop working on that port, which bear direct relationship to this story.

One thing that particularly influenced me to stop working on it was a sense of lack of purpose, all things considered, for the OpenRISC port that we were working on.

For example, these chips are sometimes used as part of bigger devices by Samsung, to control or wake up other chips; but it was not clear whether there would ever be devices with OpenRISC as the main chip, especially devices powerful enough to run Linux or similar kernels, and Debian on top. One can use FPGAs to synthesise OpenRISC or1k, but these are slow, and expensive when using lots of memory.

Without prospects of having hardware easily available to users, there's not much point in having a whole Debian port ready to run on hardware that never comes to be.

Yeah, sure, it's fun to create such a port, but it's tons of work to maintain and keep up to date forever, and with close to zero users it's very unrewarding.

Another thing that contributed to the decision to stop is that, at least in my opinion, 32-bit was not future-proof enough for general purpose computing, especially for new devices and ports starting to take off in that day and age. There was some incipient work to create another OpenRISC design for 64 bits, but it was still in an early phase.

My secret hope and ultimate goal was to be able to run as free a computer as possible as my main system. Still today many people are buying and using 32-bit devices, like small boards; but very few use them as their main computer or as servers for demanding workloads or complex services. So for me, even if feasible for the very austere and dedicated, OpenRISC or1k failed that test.

And lastly, another thing happened at the time...

Enter RISC-V

In August 2014, at the point when we were fully acknowledging the problem of upstreaming (or rather, lack thereof) the support for OpenRISC in GCC, RISC-V was announced to the world, bringing along papers with suggestive titles such as "Instruction Sets Should Be Free: The Case For RISC-V" (pdf) and articles like "RISC-V: An Open Standard for SoCs - The case for an open ISA" in EE Times.

RISC-V (like the previous RISC-n designs) had been designed (or rather, was being designed, because it was and is an as-yet unfinished standard) by people from UC Berkeley, including David Patterson, the pioneer of RISC computer designs and co-author of the seminal book "Computer Architecture: A Quantitative Approach". Other very capable people are also leading the project, doing the design and legwork to make it happen ─ see the list of contributors.

But, apart from throwing names, the project has many other merits.

Similarly to OpenRISC, RISC-V is an open instruction set architecture (ISA), but with the advantage of being designed in more recent times (thus avoiding some mistakes and optimising for problems discovered more recently, as technology evolves); with more resources; with support for instruction widths of 32, 64 and even 128 bits; with a clean set of standard but optional extensions (atomics, float, double, quad, vector, ...); and with reserved space to add new extensions in ordered and compatible ways.

In the view of some people in the OpenRISC community, this unexpected development in a way made irrelevant the ongoing update of OpenRISC for 64-bits, and from what I know and for better or worse, all efforts on that front stopped.

Also interesting (if nothing else, for my purposes of running as free a system as possible) was that the people behind RISC-V had strong intentions to make it useful for creating modern hardware, and were pushing heavily towards that from the beginning.

Together with this announcement, or soon after, came the promise of free-ish hardware in the form of the lowRISC project. Although still today it seems to be a long way from actually shipping hardware, at least there was some prospect of getting it some day.

On top of all that, about the freedom aspect, both the Berkeley and lowRISC teams engaged since very early on with the free hardware community, including attending and presenting at OpenRISC events; and lowRISC intended to have as much free hardware as possible in their planned SoC.

Starting to work on the Debian port, this time for RISC-V

So in late 2014 I slowly started to prepare the Debian port, working on it on and off.

The Userland spec was frozen in 2014 just before the announcement; the Privilege one is still not frozen today, so there was no need to rush.

There were plans to upstream the support in the toolchain for RISC-V (GNU binutils, glibc and GCC; Linux and other useful software like Qemu) in 2015, but sadly these did not materialise until late 2016 and 2017. One of the main reasons for the delay was the slowness in sorting out the copyright assignment of the code to the FSF (again). Still today, only binutils and GCC are upstreamed, and Linux and glibc depend on the Privilege spec being finished, so it will take a while.

After the experience with OpenRISC and the support in GCC, I didn't want to invest too much time, lest it all become another dead end due to lack of upstreaming ─ so I was just cross-compiling here and there, testing Qemu (which still today is very limited for this architecture, e.g. no network support and very limited character and block devices), trying to find and report bugs in the implementations, and sending patches (although I did not contribute much in the end).

Incompatible changes in the toolchain

In terms of compiling packages and building up a repository, things were complicated, and less mature and stable than they were for OpenRISC even back in 2014.

In theory, with the Userland spec being frozen, regular programs (below the Operating System level) compiled at any time could still run today; but in practice there were several rounds of profound ─or, at least, disruptive─ changes in the toolchain before and while being upstreamed, which made the binary packages that I had built not work at all (changes in the dynamic loader, in the registers where arguments are stored when calling functions, etc.).

These major breakages happened several times already, and kind of unexpectedly ─ at least for the people not heavily involved in the implementation.

When the different pieces are upstreamed it is expected that these breakages won't happen; but still there's at least the fundamental bit of glibc, which will probably change things once again in incompatible ways before or while being upstreamed.

Outside Debian but within the FOSS / Linux world, the main project that I know of is a port that some people from Fedora started in mid 2016, making great advances; but ─from what I know─ they put the project in the freezer in late 2016 until all such problems are resolved ─ they don't want to spend time rebootstrapping again and again.

What happened recently on the Debian front

In early 2016 I created the page for RISC-V in the Debian wiki, expecting that things would at last become fully stable and the important bits of the toolchain would be upstreamed during that year ─ I was too optimistic.

Some other people (including Debian folks) have been contributing for a while, in the wiki, mailing lists and IRC channels, and in the RISC-V project mailing lists ─ you will see their names everywhere.

However, the combination of lack of hardware, software not being upstreamed, and shortcomings of emulators (chiefly Qemu) makes contributions hard and very tedious, so little that is visible to the outside world has happened recently in terms of software.

The private repository-in-the-making

In late 2015 and the beginning of 2016, having some free time on my hands and expecting that all things would coalesce quickly, I started to build a repository of binary packages in a more systematic way, with most of the basic software that one can expect in a basic Debian system (including things common to all Linux systems, and also specific Debian software like dpkg or apt, and even aptitude!).

After that I also built many others outside the basic system (more than 1000 source packages and 2000 or 3000 arch-dependent binary packages in total), especially popular libraries (e.g. boost, gtk+ versions 2 and 3), interpreters (several versions of lua and perl, and python versions 2 and 3) and, in general, packages that are needed to build many other packages (like doxygen). Unfortunately, some of these most interesting packages do not compile cleanly (more because of obscure or silly errors than proper porting problems), so they are not included at the moment.

I intentionally avoided trying to compile thousands of packages in the archive which would be of nobody's use at this point; but many more could be compiled without much effort.

As for the how: initially I started cross-compiling and using rebootstrap, which was of great help in the beginning. Some of the packages that I cross-compiled had bugs that I did not know how to debug without a "live" and "native" (within emulators) system, so I tried to switch to "natively" built packages very early on. For that I needed many packages built natively (like doxygen or cmake) which would have been unnecessary had I remained cross-compiling ─ the host tools would be used in that case.

But this also forced me to eat my own dog food, which, even if much slower and more tedious, was on the whole a more instructive experience; and above all, it helped to test and make sure that the tools and the whole stack were working well enough to build hundreds of packages.

Why the repository-in-the-making was private

Until now I had not attempted to make the repository available on-line, for several reasons.

First, because it would be kind of useless to publish files that were not working or would soon stop working, with the incompatible changes in the toolchain rendering many or most of the built packages useless. And because, for many months now, I had expected that things would stabilise and that I would have something stable "really soon now" ─ but this hasn't happened yet.

Second, because of lack of resources and time since mid 2016, and because I got some packages to compile only thanks to (mostly small and unimportant, but undocumented and unsaved) hacks, often working around temporary bugs and thus not worth sending upstream; and I couldn't share the binaries without sharing the full source and fulfilling licenses like the GNU GPL. I did a new round of clean rebuilds in the last few weeks, just finished; the result is close to 1000 arch-dependent packages.

And third, because of lack of demand. This changed in the last few weeks, when other people started to ask me to share the results even if incomplete or not working properly (I had one request in the past, but couldn't oblige at the time).

Finally, the work so far: repository now on-line

So finally, with great help from Kurt Keville of MIT, and with Bytemark sponsoring a machine where most of the packages were built, here we have the repository:

The lines for /etc/apt/sources.list are:

 deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main
 deb-src [ signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main

The repository is signed with my key as Debian Developer, contained in the file /usr/share/keyrings/debian-keyring.gpg, which is part of the package debian-keyring (available from Debian and derivatives).

WARNING!!

This repository, though, is very much WIP, incomplete (some package dependencies cannot be fulfilled, and it covers only a small percentage of the Debian archive, not trying to be comprehensive at the moment) and probably does not work at all on your system at this point, for the following reasons:

The combination of all these shortcomings is especially unfortunate, because without glibc provided it will be difficult to get the binaries to run at all; but some packages that are arch-dependent yet not too tied to libc or the dynamic loader will not be affected.

At least you can try one of the few static packages present in Debian, like the one in the package bash-static. When one removes moving parts like the dynamic loader and libc, it should work, since the basic machine instructions have been stable for several years now; but I wouldn't rule out some dark magic that prevents even static binaries from working.

Still, I hope that the repository in its current state is useful to some people, at least those who requested it. If one has the environment set up, it's easy to unpack the contents of the .deb files and try out the software (which often is not trivial or very slow to compile, or needs lots of dependencies to be built first).
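For example, downloading and unpacking the bash-static package mentioned above might look like this (a sketch; it assumes an apt/dpkg recent enough to accept the riscv64 architecture, and the exact file name will vary):

sudo dpkg --add-architecture riscv64
sudo apt-get update
apt-get download bash-static:riscv64
# unpack without installing; the tree ends up under extracted/
dpkg-deb -x bash-static_*_riscv64.deb extracted/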

... and finally, even if this is not useful at all for most people at the moment, I also hope that efforts like this spark your interest in contributing to free software, free hardware, or both! :-)

22 Apr 2017 2:00am GMT

21 Apr 2017


Ritesh Raj Sarraf: Indian Economy

This has finally gotten me to ask the question.

All this time since my childhood, I grew up reading, hearing and watching that the core economy of India is Agriculture, and that it needs the highest bracket in the budgets of the country. It still applies today. Every budget has special waivers for the agriculture sector, typically in hundreds of thousands of crores in Indian Rupees. The most recent to mention is INR 27420 crores waived for just a single state (Uttar Pradesh), as was promised by the winning party during their campaign.

Wow. A quick search yields that I am not alone in noticing this. In the past, whenever I talked about the economy of this country, I mostly sidelined myself, because I never studied here, and neither did I live here much during my childhood or teenage days. Only in the last decade have I realized how much tax I pay, and where my taxes go.

I do see a justification for these loan waivers, though. As a democracy, to remain in power it is the people you need support from. And if the majority of your 1.3 billion population is in the agriculture sector, it is a very, very lucrative deal to attract them through such waivers, and expect their vote.

Here's another snippet from Wikipedia on the same topic:

Agricultural Debt Waiver and Debt Relief Scheme

On 29 February 2008, P. Chidambaram, at the time Finance Minister of India, announced a relief package for farmers which included the complete waiver of loans given to small and marginal farmers.[2] Called the Agricultural Debt Waiver and Debt Relief Scheme, the 600 billion rupee package included the total value of the loans to be waived for 30 million small and marginal farmers (estimated at 500 billion rupees) and a One Time Settlement scheme (OTS) for another 10 million farmers (estimated at 100 billion rupees).[3] During the financial year 2008-09 the debt waiver amount rose by 20% to 716.8 billion rupees and the overall benefit of the waiver and the OTS was extended to 43 million farmers.[4] In most of the Indian States the number of small and marginal farmers ranges from 70% to 94% of the total number of farmers.

And not to forget how many people pay taxes in India. To quote an unofficial statement from an Indian Media House

Only about 1 percent of India's population paid tax on their earnings in the year 2013, according to the country's income tax data, published for the first time in 16 years.

The report further states that a total of 28.7 million individuals filed income tax returns, of which 16.2 million did not pay any tax, leaving only about 12.5 million tax-paying individuals, which is just about 1 percent of the 1.23 billion population of India in the year 2013.

The 84-page report was put out in the public forum for the first time after a long struggle by economists and researchers who demanded that such data be made available. In a press release, a senior official from India's income tax department said the objective of publishing the data is to encourage wider use and analysis by various stakeholders including economists, students, researchers and academics for purposes of tax policy formulation and revenue forecasting.

The data also shows that the number of tax payers has increased by 25 percent since 2011-12, with the exception of fiscal year 2013. The year 2014-15 saw a rise to 50 million tax payers, up from 40 million three years ago. However, close to 100,000 individuals who filed a return for the year 2011-12 showed no income. The report brings to light low levels of tax collection and a massive amount of income inequality in the country, showing the rich aren't paying enough taxes.

Low levels of tax collection could be a challenge for the current government as it scrambles for money to spend on its ambitious plans in areas such as infrastructure and science & technology. Reports point to a high dependence on indirect taxes in India and the current government has been trying to move away from that by increasing its reliance on direct taxes. Official data show that the dependence has come down from 5.93 percent in 2008-09 to 5.47 percent in 2015-16.

I can't say whether I am correct in my understanding of this chart, or in my understanding of the economy of India; but if there's someone well versed in this topic who has studied the Indian economy, I'd be really interested to hear their take. Because otherwise, from my own interpretation of the subject, I don't see the day far away when this economy will plummet.

PS: Image source Wikipedia https://upload.wikimedia.org/wikipedia/commons/2/2e/1951_to_2013_Trend_C...


21 Apr 2017 6:33pm GMT

Joachim Breitner: veggies: Haskell code generation from scratch

How hard is it to write a compiler for Haskell Core? Not too hard, actually!

I wish we had a formally verified compiler for Haskell, or at least for GHC's intermediate language Core. Formalizing that part of GHC itself seems to be far out of reach, with the many phases the code goes through (Core to STG to Cmm to assembly or LLVM), optimizations happening at all of these phases, and the many complicated details of the highly tuned GHC runtime (pointer tagging, support for concurrency and garbage collection).

Introducing Veggies

So to make that goal of a formally verified compiler more feasible, I set out and implemented code generation from GHC's intermediate language Core to LLVM IR, with simplicity as the main design driving factor.

You can find the result in the GitHub repository of veggies (the name derives from "verifiable GHC"). If you clone that and run ./boot.sh some-directory, you will find that you can use the program some-directory/bin/veggies just like you would use ghc. It comes with the full base library, so your favorite variant of HelloWorld might just compile and run.
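A first experiment might look like this (a sketch; the repository URL is assumed from the author's GitHub account, and the file name is illustrative):

git clone https://github.com/nomeata/veggies
cd veggies
./boot.sh ../veggies-inst
echo 'main = putStrLn "Hello, World!"' > Hello.hs
../veggies-inst/bin/veggies Hello.hs   # invoked just like ghc
./Hello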

As of now, the code generation handles all the Core constructs (which is easy when you simply ignore all the types). It supports a good number of primitive operations, including pointers and arrays (I implement these as needed), and has support for FFI calls into C.

Why you don't want to use Veggies

Since the code generator was written with simplicity in mind, performance of the resulting code is abysmal: Everything is boxed, i.e. represented as pointer to some heap-allocated data, including "unboxed" integer values and "unboxed" tuples. This is very uniform and simplifies the code, but it is also slow, and because there is no garbage collection (and probably never will be for this project), will fill up your memory quickly.

Also, the code currently only supports 64-bit architectures, and this is hard-coded in many places.

There is no support for concurrency.

Why it might be interesting to you nevertheless

So if it is not really usable to run programs with, should you care about it? Probably not, but maybe you do for one of these reasons:

So feel free to play around with veggies, and report any issues you have on the GitHub repository.

21 Apr 2017 3:30pm GMT

Rhonda D'Vine: Home

A fair amount of things have happened since I last blogged something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, with fewer people around than hoped for, and I attribute that to some extent to the trolls and haters who defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I did approve controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.

One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.

A lot of other things have happened since, too. Mostly locally here in Vienna, several queer empowering groups were founded around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamory people (about which we gave an interview), a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be a European Lesbian* Conference in October where I help with the organization …

… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I started changing my name (and gender marker) officially. I had my first appointment at the corresponding bureau, and I hope it won't take too long, because I have to get my papers in time to book my flight to Montreal, and at some point during the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs, this might be your chance to finally sign my key.

I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.


21 Apr 2017 8:01am GMT

Noah Meyerhans: Stretch images for Amazon EC2, round 2

Following up on a previous post announcing the availability of a first round of AWS AMIs for stretch, I'm happy to announce the availability of a second round of images. These images address all the feedback we've received about the first round. The notable changes include:

AMI details are listed on the wiki. As usual, you're encouraged to submit feedback to the cloud team via the cloud.debian.org BTS pseudopackage, the debian-cloud mailing list, or #debian-cloud on irc.
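To take one of the images for a spin, something like the following should work (a sketch; the AMI ID, region and key name are placeholders; see the wiki for the real AMI IDs):

aws ec2 run-instances --region us-east-1 --image-id ami-xxxxxxxx \
    --instance-type t2.micro --key-name my-key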

21 Apr 2017 4:37am GMT