22 Mar 2019

Fedora People

Fedora Community Blog: FAS username search in Fedora Happiness Packets

Fedora Happiness Packets - project update


I have recently been working on incorporating the Fedora Account System's username search functionality into the project "Fedora Happiness Packets". After weeks of work, it's exciting to see it on the verge of completion and being incorporated into the project.

About the project

The search functionality finds the name and email address of Fedora Account System users from their username, making it much easier for a sender to send happiness packets to a particular user knowing just their username.

Getting started with python-fedora API

To incorporate the search, the python-fedora API is used to retrieve the data. After authenticating as a genuine FAS user by passing credentials to AccountSystem, we can retrieve the data using the person_by_username method of a fas2 object.
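A minimal sketch of that flow, assuming the default FAS server and that the returned record exposes human_name and email fields (check the python-fedora docs for the exact shape):

from fedora.client import AccountSystem

# Authenticate against FAS with a genuine user's credentials.
fas2 = AccountSystem(username='my-fas-user', password='my-fas-password')

# Look up a person by username; the returned record carries profile
# data such as (assumed field names) the human name and email address.
person = fas2.person_by_username('someuser')
print(person['human_name'], person['email'])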

Problems encountered

The solution to the problem statement was simple. What made the process challenging was the lack of proper documentation for the python-fedora module. Since credentials like the FAS username and FAS password were required to authenticate the user, the main goal was to reuse the data from the user's login to Fedora Happiness Packets.

I was aiming to use OpenID Connect, with the client id and client secret that are issued when registering the application with the OpenID provider (Ipsilon in this case). But the FAS client we have in python-fedora does not support OpenID Connect authentication. This was the major problem, and it blocked all progress.

Another setback was Django's crispy forms. Since we use crispy forms to create models and render the layout on the front end, it was difficult for me to access individual form elements, as the whole concept was very new to me.

Quick fix

After getting solution recommendations from other Fedora admins, I finally found a way through. Since the search functionality only requires an authenticated user, who need not be the user who logs in, we can use a testing username and password in the development environment. For testing, we can read the actual credentials from a JSON file into the project.
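A minimal sketch of that idea (the file name and key names here are assumptions):

import json

# Read the development/testing FAS credentials from a JSON file
# kept out of version control.
with open('fas_credentials.json') as f:
    credentials = json.load(f)

FAS_USERNAME = credentials['username']
FAS_PASSWORD = credentials['password']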

What I learnt

I worked with Django for the very first time and it was a great experience. I got to learn most of the core concepts of Django: how it works, how the data flows, how data gets rendered on the front end, and so on. Django's crispy forms were something really new to me and I learnt how to deal with them. Usually I rely on documentation to get into the details, but for the first time I was able to work out what was actually happening by going through the code manually.

My experience

I really enjoyed working with such a welcoming community. Almost all of my doubts were cleared during this application process. What I learnt, and will keep with me forever, is: "There is always an alternative solution to any problem! We just need to minimize the gap between its actual existence and our knowledge of its being".

Vote of Thanks!

Thanks to Justin (@jflory7) for helping me with my piles of doubts and queries. Jona (@jonatoni) was very kind to set aside time to help frame my ideas; thanks to her. A special thanks to Clement (@cverna) for helping me find a viable solution during one of the major hurdles I faced.

Thank you 🙂

The post FAS username search in Fedora Happiness Packets appeared first on Fedora Community Blog.

22 Mar 2019 8:15am GMT

Fabian Affolter: Fedora Security Lab

The Fedora Security Lab was released as part of the Fedora 30 Candidate Beta cycle.

Grab it, test it and report back.

This time we don't want to miss the release because of some last minute changes.

22 Mar 2019 8:14am GMT

21 Mar 2019

Fedora People

mythcat: Fedora 29 : Testing the dnf python module.

Today I tested the DNF Python module on Fedora 29.
Every Fedora user has used the DNF tool.
This Python module is not well documented on the internet.
A more complex example can be found in the DNF tool documentation.
I tried to see what I can get from this module.
Let's start by installing it with the pip tool:

$ pip install dnf --user

Here are some tests that I managed to run in the python shell.

[mythcat@desk ~]$ python
Python 2.7.15 (default, Oct 15 2018, 15:26:09)
[GCC 8.2.1 20180801 (Red Hat 8.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import dnf
>>> dir(dnf)
['Base', 'Plugin', 'VERSION', '__builtins__', '__doc__', '__file__', '__name__', '__package__',
'__path__', '__version__', 'base', 'callback', 'cli', 'comps', 'conf', 'const', 'crypto', 'db',
'dnf', 'dnssec', 'drpm', 'exceptions', 'goal', 'history', 'i18n', 'lock', 'logging', 'match_counter',
'module', 'package', 'persistor', 'plugin', 'pycomp', 'query', 'repo', 'repodict', 'rpm', 'sack',
'selector', 'subject', 'transaction', 'unicode_literals', 'util', 'warnings', 'yum']
>>> import dnf.conf
>>> print(dnf.conf.Conf())
[main]
assumeno: 0
assumeyes: 0
autocheck_running_kernel: 1
bandwidth: 0
best: 0
...
>>> import dnf.module
>>> import dnf.rpm
>>> import dnf.cli
>>> base = dnf.Base()
>>> base.update_cache()
True

This reads all repositories:


>>> base.read_all_repos()

You need to fill the sack before querying:


>>> base.fill_sack()
<dnf.sack.Sack object at 0x...>
>>> base.sack_activation = True

Create a query that matches all packages in the sack:


>>> qr=base.sack.query()

Get only available packages:


>>> qa=qr.available()

Get only installed packages:


>>> qi=qr.installed()
>>> q_a=qa.run()
>>> for pkg in qi.run():
...     if pkg not in q_a:
...         print('%s.%s' % (pkg.name, pkg.arch))
...
NetworkManager-openvpn.x86_64
NetworkManager-openvpn-gnome.x86_64
coolkey.x86_64
glibc-debuginfo.x86_64
glibc-debuginfo-common.x86_64
kernel.x86_64
kernel.x86_64
kernel-core.x86_64
kernel-core.x86_64

Get all packages installed on Linux:


>>> q_i=qi.run()
>>> for pkg in qi.run():
...     print('%s.%s' % (pkg.name, pkg.arch))
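Putting the calls above together, here is a minimal standalone sketch of the same comparison, runnable as a plain script:

import dnf

# Open a DNF session: read repo configs and fill the sack.
base = dnf.Base()
base.read_all_repos()
base.fill_sack()

# List installed packages that are no longer available in the enabled repos.
query = base.sack.query()
available = query.available().run()
for pkg in query.installed().run():
    if pkg not in available:
        print('%s.%s' % (pkg.name, pkg.arch))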

You can see more about the Python programming language on my blog.

21 Mar 2019 7:41pm GMT

Remi Collet: PHP version 7.2.17RC1 and 7.3.4RC1

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests (for x86_64 only), and also as base packages.

RPMs of PHP version 7.3.4RC1 are available as SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 30 or the remi-php73-test repository for Fedora 27-29 and Enterprise Linux.

RPMs of PHP version 7.2.17RC1 are available as SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or the remi-php72-test repository for Fedora 27 and Enterprise Linux.

PHP version 7.1 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Notice: version 7.3.4RC1 in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

The RC version is usually the same as the final version (no change accepted after RC, except for security fixes).

Software Collections (php72, php73)

Base packages (php)

21 Mar 2019 4:11pm GMT

Remi Collet: Small history about QA

Although I'm mainly a developer, I now spend most of my time doing QA on PHP projects.

Here is, around the release of versions 7.2.17RC1 and 7.3.4RC1, a report which should help in understanding this activity.

1. Presentation

Usually, tests are done by PHP developers, particularly thanks to Travis, and then by users who install the RC version available 2 weeks before a GA version.

The PHP project follows a release process (cf. README.RELEASE_PROCESS) which gives 2 days between the preparation of a version, on Tuesday on git, and its announcement on the mailing lists on Thursday. These 2 days are especially designed to allow the build of binary packages (mostly by Microsoft, and often by me for my repository) and to allow a last QA check which may discover some late issue.

When the new versions became available (on Tuesday afternoon) I started building the packages for my repository, giving more coverage than the current Travis configuration:

I also ran the build of the 7.3.4RC1 package in Fedora rawhide to trigger the rebuild of the whole PHP stack in Koschei, one of the CI tools of the Fedora project.

Notice: the time to build all the packages for all the targets is about 3 hours for each version! (I really need a faster builder.)

2. Discovered issues

2.1. Failed tests with pcre2 version 10.33RC1

Already available in rawhide, this version introduces a change in some error messages, making 2 tests fail.

Minor issue, fixed in PHP 7.3+: commit c421d9a.

2.2. Failed tests on 32-bit

In the fix of bug #76117 the output of var_export has changed, making 2 tests fail on 32-bit.

After confirmation by the author of the change, the tests have been fixed in PHP 7.2+: commits a467a89 and 5c8d69b.

2.3. Regression

Koschei allowed us to discover very quickly an important regression in the run of the "make test" command. After digging, this regression turned out to have been introduced by the fix of bug #77609; read the comments on commit 3ead672.

After discussion between the release managers, it was chosen to:

The version which will be announced shortly will not be affected by this regression.

3. Conclusion

Ensuring the quality of PHP, and the absence of regressions, is complex, long and serious work. Thanks to all the actors, developers, the QA team and users, this works pretty well.

So, if you use PHP in a development environment, it is essential to install the RC versions to detect and report any problem quickly, so we can react before the final version.

For users of my repository, the RC versions of PHP and various extensions are nearly always available in the testing repositories.

21 Mar 2019 3:10pm GMT

Daniel Pocock: Don't trust me. Trust the voters.

On 9 March, when I was the only member of the Debian community to submit a nomination and fully-fledged platform four minutes before the deadline, I did so on the full understanding that voters have the option to vote "None of the above".

In other words, knowing that nobody can win by default, voters could reject and humiliate me.

Or worse.

My platform had been considered carefully over many weeks, despite a couple of typos. If Debian can't accept that, maybe I should write typos for the White House press office?

One former leader of the project, Steve McIntyre, replied:

I don't know what you think you're trying to achieve here

Hadn't I explained what I was trying to achieve in my platform? Instead of pressing the "send put down" button, why not try reading it?

Any reply in support of my nomination has been censored, so certain bullies create the impression that theirs is the last word.

I've put myself up for election before, yet I've never, ever been so disappointed. Just as Venezuela's crisis is now seen as a risk to all their neighbours, the credibility of elections and membership status is a risk to confidence throughout the world of free software. It has already happened in the Linux Foundation and FSFE, and now we see it happening in Debian.

In student politics, I was on the committee that managed a multi-million dollar budget for services in the union building and worked my way up to become NUS ambassador to Critical Mass, paid to cycle home for a year and sharing an office with one of the grand masters of postal voting: Voters: 0, Cabals: 1.

Ironically, the latter role is probably more relevant to the skills required to lead a distributed organization like Debian. Critical Mass rides have no leader at all.

When I volunteered to be FSFE Fellowship representative, I faced six other candidates. On the first day of voting, I was rear-ended by a small van, pushed several meters along the road and thrown off a motorbike, half way across a roundabout. I narrowly missed being run over by a bus.

It didn't stop me. An accident? Russians developing new tactics for election meddling? Premonition of all the backstabbings to come? Miraculously, the Fellowship still voted for me to represent them.

Nonetheless, Matthias Kirschner, FSFE President, appointed one of the rival candidates to a superior class of membership just a few months later. He also gave full membership rights to all of his staff, ensuring they could vote in the meeting to remove elections from the constitution. Voters: 0, Cabals: 2.

My platform and photo for the FSFE election also emphasize my role in Debian, and some Debian people have always resented that, hence their pathological obsession with trying to control me or discredit me.

Yet in Debian's elections, I've hit a dead-end. The outgoing leader of the project derided me for being something less than a "serious" candidate, despite the fact I was the only one who submitted a nomination before the deadline. People notice things like that. It doesn't stick to me, it sticks to Debian.

I thank Chris Lamb for interjecting, because it reveals a lot about today's problems. A series of snipes like that, usually made in private, have precipitated increasing hostility in recent times.

When I saw Lamb's comment, I couldn't help erupting in a fit of laughter. The Government of Lamb's own country, the UK, was elected under the slogan Strong and stable leadership. There used to be a time when the sun never set on the British empire; today the sun never sets on laughter about their lack of a serious plan for Brexit. Serious leadership appears somewhat hard to find. Investigations found that the pro-Brexit movement cheated with help from Cambridge Analytica and violations of campaign spending limits, but the vote won't be re-run (yet). Voters: 0, Cabals: 3.

It is disappointing when a leader seeks to vet his replacement in this way. In Venezuela, Hugo Chavez assured everybody that Nicolas Maduro was the only serious candidate who could succeed him. Venezuelans can see the consequences of such interventions by outgoing leaders clearly, but only during daylight, because the power has been out continuously for more than a week now. Many of their best engineers emigrated and Debian risks similar phenomena with these childish antics.

The whole point of a free and fair election is that voters are the ultimate decision maker and we all put our trust in the voters alone to decide who is the most serious candidate. I remain disappointed that Lamb was not willing to talk face-to-face with those people he had differences with.

In any other context, the re-opening of nominations and the repeated character attacks, facilitated by no less than another candidate who already holds office in the Debian account managers team, would be considered as despicable as plagiarism and doping. So why is this acceptable in Debian? Voters: 0, Cabals: 4. If you ran a foot race this way, nobody would respect the outcome.

Having finished multiple cross countries, steeplechases and the odd marathon, why can't I even start in Debian's annual election?

In his interview with Mr Sam Varghese of IT Wire, rival candidate Joerg "Ganeff" Jaspert talks about "mutual trust". Well, he doesn't have to. I put my trust in the voters. That's democracy. Who is afraid of it? That's what a serious vote is all about.

Jaspert's team have gone to further lengths to gain advantages, spreading rumours on the debian-private mailing list that they have "secret evidence" to justify their behaviour. It is amusing to see such ridiculous claims being made in Debian at the same time that Maduro in Venezuela is claiming to have secret evidence that his rival, Guaido, sabotaged the electricity grid. The golden rule of secret evidence: don't hold your breath waiting for it to materialize.

While Maduro's claims of sabotage seem far-fetched, it is widely believed that Republican-friendly Enron played a significant role in Californian power shortages, swinging public mood against the Democrat incumbent and catapulting the world's first Governator into power (excuse the pun). Voters: 0, Cabals: 5.

If the DAMs do have secret evidence against any Debian Developer, it is only fair to show the evidence to the Developer and give that person a right of reply. If such "evidence" is spread behind somebody's back, it is because it wouldn't stand up to any serious scrutiny.

Over the last six months, Jaspert, Lamb and Co can't even decide whether they've demoted or expelled certain people. That's not leadership. It's a disgrace. If people are trusted to choose me as the Debian Project Leader, I guarantee that no other volunteer will be put through such intimidation and shaming ever again.

After writing a blog about human rights in January, it is Jaspert who censored it from Planet Debian just hours later.

Many people were mystified. Why would my blog post about human rights be censored by Debian? People have been scratching their heads trying to work out how it could even remotely violate the code of conduct. Is it because the opening quote came from Jaspert himself and he didn't want his cavalier attitude put under public scrutiny?

This is not involving anything from the universal declaration of human rights. We are simply a project of volunteers which is free to chose its members as it wishes.

which is a convenient way of eliminating competitors. After trampling on my blog and my nomination for the DPL election, it is simply a coincidence that Jaspert was the next to put his hand up and nominate.

In Jonathan Carter's blog about his candidacy, he quotes Ian Murdock:

You don't want design by committee, but you want to tap in to the wisdom of the crowd.... the crowd is the most intelligent of all.

If that is true, why is a committee of just three people, one of whom is a candidate, telling the crowd who they can and can't vote for?

If that isn't a gerrymander, what is?

Following through on the threat

If you are going to use veiled threats to keep your developers in line, every now and then, you have to follow through, as Jaspert has done recently using his DAM position to make defamatory statements in the press.

If Jaspert's organization really is willing to threaten and shame volunteers and denounce human rights, as he did in this quote, then I wouldn't want to be a part of it anyway, consider this my retirement and resignation and eliminate any further questions about my status. Nonetheless, I remain an independent Debian Developer just as committed to serving Debian users as ever before. Voters: 0, Cabals: 6.

I remain ready and willing to face "None of the above" and any other candidate, serious or otherwise, on a level playing field, to serve those who would vote for me over and above those who seek to blackmail me and push me around with secret evidence and veiled threats.

21 Mar 2019 9:07am GMT

Ingvar Hagelund: Packages of varnish-6.2.0 with matching vmods, for el6 and el7

The Varnish Cache project recently released a new upstream version 6.2 of Varnish Cache. I updated the fedora rawhide package yesterday. I have also built a copr repo with varnish packages for el6 and el7 based on the fedora package. A snapshot of matching varnish-modules (based on Nils Goroll's branch) is also available.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish62/.

vmods included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

21 Mar 2019 8:29am GMT

Fedora Community Blog: AskFedora refresh: we’ve moved to Discourse!


We have been working on moving AskFedora to a Discourse instance after seeing how well the community took to discussion.fedoraproject.org. After working on it for a few weeks now, we're happy to report that the new AskFedora is now ready for use at https://askbeta.fedoraproject.org.

The new AskFedora!

The new AskFedora is a Discourse instance hosted for us by Discourse, similar to discussion.fedoraproject.org. However, where discussion.fedoraproject.org is meant for development discussion within the community, AskFedora is meant for end-user troubleshooting. While we did toy with the idea of simply using discussion.fedoraproject.org for both purposes, we felt there was a risk that the mix would hamper both use cases. So, the decision was made to stick to the current organisation and use a separate Discourse instance for user queries.

Getting started: logging in and language selection

The new AskFedora is limited to FAS (Fedora Account System) logins only. This is unlike the Askbot instance, where we also permitted social media and other logins. Limiting the logins to FAS permits us to have better control over the instance, and makes it much easier to gather data on usage and so on. Setting up a new FAS account is quite trivial, so we do not expect this to be an issue for end-users either.

Another way in which AskFedora on Discourse differs from AskFedora on Askbot is that we chose not to host per-language subsites. Instead, we've leveraged Discourse categories and user-groups to support languages.

When you log in for the first time, you will only see the general categories.

These are common to all users. Based on interest from the community, and after verifying that we had community members willing to oversee these languages, the new AskFedora currently supports English, Spanish, Italian, and Persian. Here is how:

Each language has an associated user-group. All users can join and leave these language user-groups at any time. Membership to each user-group gives access to "translated" categories, i.e., identical categories set up for users of the particular language group. Users can join as many language groups as they wish!

Categories are loosely based on the lifecycle of a Fedora release. The top levels ask the question "what stage of the Fedora life-cycle are you at?". The next level tries to be more specific to ask something on the lines of "what tool are you using?". These categories are only meant to help organise the forum somewhat. They are not set in stone, and of course, lots of topics may fit into a multitude of categories. We leave it up to the users of the Forum to choose the appropriate category for their query.

So, when you do log in, please go to the "Start here" category as the banner requests. We have a topic in each supported language documenting what we've written here: how to join the appropriate language group and get started.

Feedback and next steps

At this time, we are only announcing the new instance to the community. Hence, this post on the community blog first. The forum will be announced to the wider user-base on the Fedora magazine a week or two later. This gives us time to have a set of community members on the forum already to help end-users when they do get started. This also gives us time to collect feedback from the community and make tweaks to improve the user-experience before the "official launch". Please use the "Site Feedback" category to drop us comments. Before the forum is announced to the wider audience, we will also update the URL to use https://ask.fedoraproject.org and a redirect from https://askbeta.fedoraproject.org will be put in place to ensure a smooth transition for current users.

The usual reminders

All forums and channels are extensions of the Fedora community. They are tools that enable us to communicate with each other. Therefore, everything that occurs on them must follow our Code of Conduct. In short, please remember to "be excellent to each other". There will always be disagreements, and us being us, tempers will flare. However, before you type out a reply, repeat to yourself: "be excellent to each other" again and again, until your draft has lost its aggression/annoyance/negative connotations. This also applies to trolling: even when pointing it out, let's stay excellent to each other. If you need any help, the forum staff are always there to step in; just drop us a message.

As a closing word, we're grateful to everyone that put in the work to make this refresh happen, especially the Askbot developers that have hosted AskFedora for us till now, and the Discourse team that will host it for us from now on. It has taken quite a few hours of discussion, planning, and work to set things up the way we felt would help users most. All of this happened on the Fedora Join SIG's Pagure project. We are always looking for more hands to help, and we are even happier if we can pass on some of what we have learned in our time in the Fedora community to other members. Please, do get in touch!

The post AskFedora refresh: we've moved to Discourse! appeared first on Fedora Community Blog.

21 Mar 2019 7:33am GMT

Peter Hutterer: Using hexdump to print binary protocols

I had to work on an image yesterday where I couldn't install anything and the amount of pre-installed tools was quite limited. And I needed to debug an input device, usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is a single-quote-enclosed string that contains the count, the element size and a double-quote-enclosed printf-like format string. So a simple example is this:


$ hexdump -v -e '1/2 "%d\n"' <filename>
-11643
23698
0
0
-5013
6
0
0

This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.


$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"' <filename>
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0

This prints the same 2-byte input value, once as decimal signed integer, once as lowercase hex. If we have multiple identical things to print, we can do this:


$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"' <filename>
-10922 23698 hex: 56 d5 92 5c
0 0 hex: 0 0 0 0
14879 1 hex: 1f 3a 1 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0

This prints two elements, each of size 2, as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf instruction and documented in the manual.

Let's go and print our protocol. The struct representing the protocol is this one:


struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
__kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
unsigned int __usec;
#else
__kernel_ulong_t __usec;
#endif
#define input_event_sec __sec
#define input_event_usec __usec
#endif
__u16 type;
__u16 code;
__s32 value;
};

So we have two longs for sec and usec, two shorts for type and code and one signed 32-bit int. Let's print it:


$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22
E: 1553127085.097503 0002 0000 1
E: 1553127085.097503 0002 0001 -1
E: 1553127085.097503 0000 0000 0
E: 1553127085.097542 0002 0001 -2
E: 1553127085.097542 0000 0000 0
E: 1553127085.108741 0002 0001 -4
E: 1553127085.108741 0000 0000 0
E: 1553127085.118211 0002 0000 2
E: 1553127085.118211 0002 0001 -10
E: 1553127085.118211 0000 0000 0
E: 1553127085.128245 0002 0000 1

And voila, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.
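If you later want to parse the same stream without hexdump, a rough Python equivalent is sketched below, assuming a 64-bit platform where both timeval fields are 8 bytes:

import struct

# struct input_event on x86_64: two 8-byte longs (sec, usec),
# two unsigned shorts (type, code), one signed 32-bit int (value).
EVENT_FORMAT = 'qqHHi'
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)  # 24 bytes

with open('/dev/input/event22', 'rb') as f:
    while True:
        data = f.read(EVENT_SIZE)
        if len(data) < EVENT_SIZE:
            break
        sec, usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, data)
        print('E: %u.%06u %04x %04x %5d' % (sec, usec, ev_type, code, value))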

21 Mar 2019 12:30am GMT

20 Mar 2019

Fedora People

Ben Williams: F29-20190319 updated Live isos released

The Fedora Respins SIG is pleased to announce the latest release of Updated F29-20190319 Live ISOs, carrying the 4.20.16-200 kernel.

This set of updated ISOs will save a considerable amount of updates after install. (New installs of Workstation have 1.2GB of updates.)

This set also includes an updated ISO of the Security Lab.

A huge thank you goes out to IRC nicks dowdle and Southern-Gentlem for testing these ISOs.

We would also like to thank Fedora QA for running the following tests on our ISOs:

https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=29&build=FedoraRespin-29-updates/20190319.0&groupid=1

As always, our ISOs can be found at http://tinyurl.com/Live-respins .

20 Mar 2019 12:46pm GMT

Fedora Magazine: 4 cool terminal multiplexers

The Fedora OS is comfortable and easy for lots of users. It has a stunning desktop that makes it easy to get everyday tasks done. Under the hood is all the power of a Linux system, and the terminal is the easiest way for power users to harness it. By default terminals are simple and somewhat limited. However, a terminal multiplexer allows you to turn your terminal into an even more incredible powerhouse. This article shows off some popular terminal multiplexers and how to install them.

Why would you want to use one? Well, for one thing, it lets you log out of your system while leaving your terminal session undisturbed. It's incredibly useful to log out of your console, secure it, travel somewhere else, then log in remotely with SSH and continue where you left off. Here are some utilities to check out.

One of the oldest and most well-known terminal multiplexers is screen. However, because the code is no longer maintained, this article focuses on more recent apps. ("Recent" is relative - some of these have been around for years!)

Tmux

The tmux utility is one of the most widely used replacements for screen. It has a highly configurable interface. You can program tmux to start up specific kinds of sessions based on your needs. You'll find a lot more about tmux in this article published earlier:

Use tmux for a more powerful terminal

Already a tmux user? You might like this additional article on making your tmux sessions more effective.

To install tmux, use the sudo command along with dnf, since you're probably in a terminal already:

$ sudo dnf install tmux

To start learning, run the tmux command. A single pane window starts with your default shell. Tmux uses a modifier key to signal that a command is coming next. This key is Ctrl+B by default. If you enter Ctrl+B, C you'll create a new window with a shell in it.

Here's a hint: Use Ctrl+B, ? to enter a help mode that lists all the keys you can use. To keep things simple, look for the lines starting with bind-key -T prefix at first. These are keys you can use right after the modifier key to configure your tmux session. You can hit Ctrl+C to exit the help mode back to tmux.

To completely exit tmux, use the standard exit command or Ctrl+D keystroke to exit all the shells.

Dvtm

You might have recently seen the Magazine article on dwm, a dynamic window manager. Like dwm, dvtm is for tiling window management - but in a terminal. It's designed to adhere to the legacy UNIX philosophy of "do one thing well" - in this case managing windows in a terminal.

Installing dvtm is easy as well. However, if you want the logout functionality mentioned earlier, you'll also need the abduco package which handles session management for dvtm.

$ sudo dnf install dvtm abduco

The dvtm utility has many keystrokes already mapped to allow you to manage windows in the terminal. By default, it uses Ctrl+G as its modifier key. This keystroke tells dvtm that the following character is going to be a command it should process. For instance, Ctrl+G, C creates a new window and Ctrl+G, X removes it.

For more information on using dvtm, check out the dvtm home page which includes numerous tips and get-started information.

Byobu

While byobu isn't truly a multiplexer on its own - it wraps tmux or even the older screen to add functions - it's worth covering here too. Byobu makes terminal multiplexers better for novices, by adding a help menu and window tabs that are slightly easier to navigate.

Of course it's available in the Fedora repos as well. To install, use this command:

$ sudo dnf install byobu

By default the byobu command runs screen underneath, so you might want to run byobu-tmux to wrap tmux instead. You can then use the F9 key to open up a help menu for more information to help you get started.

Mtm

The mtm utility is one of the smallest multiplexers you'll find. In fact, it's only about 1000 lines of code! You might find it helpful if you're in a limited environment such as old hardware, a minimal container, and so forth. To get started, you'll need a couple packages.

$ sudo dnf install git ncurses-devel make gcc

Then clone the repository where mtm lives:

$ git clone https://github.com/deadpixi/mtm.git

Change directory into the mtm folder and build the program:

$ make

You might receive a few warnings, but when you're done, you'll have the very small mtm utility. Run it with this command:

$ ./mtm

You can find all the documentation for the utility on its GitHub page.

These are just some of the terminal multiplexers out there. Got one you'd like to recommend? Leave a comment below with your tips and enjoy building windows in your terminal!


Photo by Michael on Unsplash.

20 Mar 2019 8:00am GMT

19 Mar 2019

Fedora People

Kiwi TCMS: Kiwi TCMS 6.6

We're happy to announce Kiwi TCMS version 6.6! This is a medium-severity security, improvement, and bug-fix update. You can explore everything at https://demo.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  c4734f98ca37    971.3 MB
kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

Changes since Kiwi TCMS 6.5.3

Security

  • Explicitly require marked v0.6.1 to fix medium severity ReDoS vulnerability. See SNYK-JS-MARKED-73637

Improvements

  • Update python-gitlab from 1.7.0 to 1.8.0
  • Update django-contrib-comments from 1.9.0 to 1.9.1
  • More strings marked as translatable (Christophe CHAUVET)
  • When creating new TestCase you can now change notification settings. Previously this was only possible during editing
  • Document import-export approaches. Closes Issue #795
  • Document available test automation plugins
  • Improve documentation around Docker customization and SSL termination
  • Add a documentation example of reverse proxy configuration for HAProxy (Nicolas Auvray)
  • TestPlan.add_case() will now set the sortkey to highest in plan + 10 (Rik)
  • Add LinkOnly issue tracker. Fixes Issue #289
  • Use the same HTML template for both TestCase new & edit
  • New API methods for adding, removing and listing attachments. Fixes Issue #446:
    • TestPlan.add_attachment()
    • TestCase.add_attachment()
    • TestPlan.list_attachments()
    • TestCase.list_attachments()
    • Attachments.remove_attachment()

Database migrations

  • Populate missing TestCase.text history. In version 6.5 the TestCase model was updated to store the text into a single field called text instead of 4 separate fields. During that migration historical records were updated to have the new text field but values were not properly assigned.

    The "effect" of this is that in TestCaseRun records you were not able to see the actual text b/c it was None.

    This change amends 0006_merge_text_field_into_testcase_model for installations which have not yet migrated to 6.5 or later. We also provide the data-only migration 0009_populate_missing_text_history which will inspect the current state of the DB and copy the text to the last historical record.

Removed functionality

  • Remove legacy reports. Closes Issue #657

  • Remove "Save & Continue" functionality from TestCase edit page

  • Renamed API methods:

    • TestCaseRun.add_log() -> TestCaseRun.add_link()
    • TestCaseRun.remove_log() -> TestCaseRun.remove_link()
    • TestCaseRun.get_logs() -> TestCaseRun.get_links()

    These methods work with URL links, which can be added or removed to test case runs.

Bug fixes

  • Remove hard-coded timestamp in TestCase page template, References Issue #765
  • Fix handling of ?from_plan URL parameter in TestCase page
  • Make TestCase.text occupy 100% width when rendered. Fixes Issue #798
  • Enable markdown.extensions.tables. Fixes Issue #816
  • Handle form errors and default values for TestPlan new/edit. Fixes Issue #864
  • Tests + fix for failing TestCase rendering in French
  • Show color-coded statuses on dashboard page when seen with non-English language
  • Refactor check for confirmed test cases when editing to work with translations
  • Fix form values when filtering test cases inside TestPlan. Fixes Issue #674 (@marion2016)
  • Show delete icon for attachments. Fixes Issue #847

Refactoring

  • Remove unused .current_user instance attribute
  • Remove EditCaseForm and use NewCaseForm instead, References Issue #708, Issue #812
  • Fix "Select All" checkbox. Fixes Issue #828 (Rady)

Translations

How to upgrade

If you are using Kiwi TCMS as a Docker container then:

cd Kiwi/
git pull
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Don't forget to back up before upgrading!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:<next_upgrade_version>
edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

19 Mar 2019 8:40pm GMT

Red Hat Security: The Product Security Blog has moved!

Red Hat Product Security has joined forces with other security teams inside Red Hat to publish our content in a common venue using the Security channel of the Red Hat Blog. This move provides a wider variety of important Security topics, from experts all over Red Hat, in a more modern and functional interface. We hope everyone will enjoy the new experience!


19 Mar 2019 7:38pm GMT

Michael Catanzaro: Epiphany Technology Preview Upgrade Requires Manual Intervention

Jan-Michael has recently changed Epiphany Technology Preview to use a separate app ID. Instead of org.gnome.Epiphany, it will now be org.gnome.Epiphany.Devel, to avoid clashing with your system version of Epiphany. You can now have separate desktop icons for both system Epiphany and Epiphany Technology Preview at the same time.

Because flatpak doesn't provide any way to rename an app ID, this means it's the end of the road for previous installations of Epiphany Technology Preview. Manual intervention is required to upgrade. Fortunately, this is a one-time hurdle, and it is not hard:

$ flatpak uninstall org.gnome.Epiphany

Uninstall the old Epiphany…

$ flatpak install gnome-apps-nightly org.gnome.Epiphany.Devel org.gnome.Epiphany.Devel.Debug

…install the new one, assuming that your remote is named gnome-apps-nightly (the name used locally may differ), and that you also want to install debuginfo to make it possible to debug it…

$ mv ~/.var/app/org.gnome.Epiphany ~/.var/app/org.gnome.Epiphany.Devel

…and move your personal data from the old app to the new one.

Then don't forget to make it your default web browser under System Settings -> Details -> Default Applications. Thanks for testing Epiphany Technology Preview!

19 Mar 2019 6:39pm GMT

Roland Wolters: Of debugging Ansible Tower and underlying cloud images

<figure class="alignright is-resized">Ansible Logo</figure>

Recently I was experimenting with Tower's isolated nodes feature - but somehow it did not work in my environment. Debugging told me a lot about Ansible Tower - and also why you should not trust arbitrary cloud images.

Background - Isolated Nodes

Ansible Tower has a nice feature called "isolated nodes". Those are dedicated Tower instances which can manage nodes in separated environments - basically an Ansible Tower Proxy.

An Isolated Node is an Ansible Tower node that contains a small piece of software for running playbooks locally to manage a set of infrastructure. It can be deployed behind a firewall/VPC or in a remote datacenter, with only SSH access available. When a job is run that targets things managed by the isolated node, the job and its environment will be pushed to the isolated node over SSH, where it will run as normal.

Ansible Tower Feature Spotlight: Instance Groups and Isolated Nodes

Isolated nodes are especially handy when you setup your automation in security sensitive environments. Think of DMZs here, of network separation and so on.

I was fooling around with a clustered Tower installation on RHEL 7 VMs in a cloud environment when I ran into trouble though.

My problem - Isolated node unavailable

Isolated nodes - like instance groups - have a status inside Tower: if things are problematic, they are marked as unavailable. And this is what happened with my instance isonode.remote.example.com running in my lab environment:

<figure class="wp-block-image"><figcaption>Ansible Tower showing an instance node as unavailable</figcaption></figure>

I tried to turn it "off" and "on" again with the button in the control interface. That made the node available, and it was even able to execute jobs, but it quickly became unavailable again soon after.

Analysis

So what happened? The Tower logs showed a Python error:

# tail -f /var/log/tower/tower.log
fatal: [isonode.remote.example.com]: FAILED! => {"changed": false,
"module_stderr": "Shared connection to isonode.remote.example.com
closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n
File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1552400585.04
-60203645751230/AnsiballZ_awx_capacity.py\", line 113, in <module>\r\n
_ansiballz_main()\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 105, in
_ansiballz_main\r\n    invoke_module(zipped_mod, temp_path,
ANSIBALLZ_PARAMS)\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 48, in
invoke_module\r\n    imp.load_module('__main__', mod, module, MOD_DESC)\r\n
File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 74, in
<module>\r\n  File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\",
line 60, in main\r\n  File
\"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 27, in
get_cpu_capacity\r\nAttributeError: 'module' object has no attribute
'cpu_count'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact
error", "rc": 1}

PLAY RECAP *********************************************************************
isonode.remote.example.com : ok=0    changed=0    unreachable=0    failed=1  

Apparently a Python function was missing. If we check the code we see that indeed in line 27 of file awx_capacity.py the function psutil.cpu_count() is called:

def get_cpu_capacity():
    env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
    cpu = psutil.cpu_count()

Support for this function was added in version 2.0 of psutil:

2014-03-10
Enhancements
424: [Windows] installer for Python 3.X 64 bit.
427: number of logical and physical CPUs (psutil.cpu_count()).

psutil history

Note the date here: 2014-03-10, pretty old! I checked the version of the installed package, and indeed the version was pre-2.0:

$ rpm -q --queryformat '%{VERSION}\n' python-psutil
1.2.1

To be really sure and also to ensure that there was no weird function backporting, I checked the function call directly on the Tower machine:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> import psutil as module
>>> functions = inspect.getmembers(module, inspect.isfunction)
>>> functions
[('_assert_pid_not_reused', <function _assert_pid_not_reused at
0x7f9eb10a8d70>), ('_deprecated', <function deprecated at 0x7f9eb38ec320>),
('_wraps', <function wraps at 0x7f9eb414f848>), ('avail_phymem', <function
avail_phymem at 0x7f9eb0c32ed8>), ('avail_virtmem', <function avail_virtmem at
0x7f9eb0c36398>), ('cached_phymem', <function cached_phymem at
0x7f9eb10a86e0>), ('cpu_percent', <function cpu_percent at 0x7f9eb0c32320>),
('cpu_times', <function cpu_times at 0x7f9eb0c322a8>), ('cpu_times_percent',
<function cpu_times_percent at 0x7f9eb0c326e0>), ('disk_io_counters',
<function disk_io_counters at 0x7f9eb0c32938>), ('disk_partitions', <function
disk_partitions at 0x7f9eb0c328c0>), ('disk_usage', <function disk_usage at
0x7f9eb0c32848>), ('get_boot_time', <function get_boot_time at
0x7f9eb0c32a28>), ('get_pid_list', <function get_pid_list at 0x7f9eb0c4b410>),
('get_process_list', <function get_process_list at 0x7f9eb0c32c08>),
('get_users', <function get_users at 0x7f9eb0c32aa0>), ('namedtuple',
<function namedtuple at 0x7f9ebc84df50>), ('net_io_counters', <function
net_io_counters at 0x7f9eb0c329b0>), ('network_io_counters', <function
network_io_counters at 0x7f9eb0c36500>), ('phymem_buffers', <function
phymem_buffers at 0x7f9eb10a8848>), ('phymem_usage', <function phymem_usage at
0x7f9eb0c32cf8>), ('pid_exists', <function pid_exists at 0x7f9eb0c32140>),
('process_iter', <function process_iter at 0x7f9eb0c321b8>), ('swap_memory',
<function swap_memory at 0x7f9eb0c327d0>), ('test', <function test at
0x7f9eb0c32b18>), ('total_virtmem', <function total_virtmem at
0x7f9eb0c361b8>), ('used_phymem', <function used_phymem at 0x7f9eb0c36050>),
('used_virtmem', <function used_virtmem at 0x7f9eb0c362a8>), ('virtmem_usage',
<function virtmem_usage at 0x7f9eb0c32de8>), ('virtual_memory', <function
virtual_memory at 0x7f9eb0c32758>), ('wait_procs', <function wait_procs at
0x7f9eb0c32230>)]

Searching for a package origin

So how to solve this issue? My first idea was to get this working by updating the code to use the multiprocessing lib:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> cpu = multiprocessing.cpu_count()
>>> cpu
4
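For a quick local workaround, a version-tolerant lookup could have looked like the sketch below; this is an illustration, not the actual awx code:

import multiprocessing

# Prefer psutil.cpu_count() where available (psutil >= 2.0),
# fall back to the standard library otherwise.
try:
    import psutil
    cpu_count = getattr(psutil, 'cpu_count', None) or multiprocessing.cpu_count
except ImportError:
    cpu_count = multiprocessing.cpu_count

print(cpu_count())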

But while I was filing a bug report I wondered why RHEL shipped such an ancient library. After all, RHEL 7 was released in June 2014, and psutil has had cpu_count available since early 2014! And indeed, a quick search for the package via the Red Hat package search showed a weird result: python-psutil was never part of base RHEL 7! It was only shipped as part of some very, very old OpenStack channels:

<figure class="wp-block-image"><figcaption>access.redhat.com package search, results for python-psutil</figcaption></figure>

Newer OpenStack channels in fact come along with newer versions of python-psutil.

So how did this outdated package end up on this RHEL 7 image? Why was it never updated?

The cloud image is to blame! The package was installed on it, most likely during the creation of the image: python-psutil is needed for OpenStack Heat, so I assume that these RHEL 7 images were once created via OpenStack and then used as the default image in this demo environment.

And after the initial creation of the image the Heat packages were forgotten. In the meantime the image was updated to newer RHEL versions, snapshots were created as new defaults and so on. But since the package in question was never part of the main RHEL repos, it was never changed or removed. It just stayed there. Waiting, apparently, for me 😉

Conclusion

This issue showed me how tricky cloud images can be. Think about your own cloud images: have you really checked all of them and verified that no package, no startup script, no configuration was changed from the Linux distribution vendor's base setup?

With RPMs this is still manageable, you can track if packages are installed which are not present in the existing channels. But did someone install something with pip? Or any other way?

Take my case: an outdated version of a library was called instead of a much, much more recent one. If a serious security issue had been found in the library in the meantime, I would have been exposed, even though my update management did not report any library needing updates.

I learned my lesson: be more critical of cloud images, checking them in more detail in the future to avoid nasty surprises in production. And I can only recommend that you do the same.


19 Mar 2019 3:09pm GMT

Alexander Larsson: Introducing flat-manager

A long time ago I wrote a blog post about how to maintain a Flatpak repository.

It is still a nice, mostly up to date, description of how Flatpak repositories work. However, it doesn't really have a great answer to the issue called syncing updates in the post. In other words, it really is more about how to maintain a repository on one machine.

In practice, at least on a larger scale (like e.g. Flathub) you don't want to do all the work on a single machine like this. Instead you have an entire build-system where the repository is the last piece.

Enter flat-manager

To support this I've been working on a side project called flat-manager. It is a service written in Rust that manages Flatpak repositories. Recently we migrated Flathub to use it, and it seems to work quite well.

At its core, flat-manager serves and maintains a set of repos, and has an API that lets you push updates to it from your build-system. However, the way it is set up is a bit more complex, which allows some interesting features.

Core concept: a build

When updating an app, the first thing you do is create a new build, which just allocates an id that you use in later operations. Then you can upload one or more builds to this id.

This separation of the build creation and the upload is very powerful, because it allows you to upload the app in multiple operations, potentially from multiple sources. For example, in the Flathub build-system each architecture is built on a separate machine. Before flat-manager we had to collect all the separate builds on one machine before uploading to the repo. In the new system each build machine uploads directly to the repo with no middle-man.

Committing or purging

An important idea here is that the new build is not finished until it has been committed. The central build-system waits until all the builders report success before committing the build. If any of the builds fail, we purge the build instead, making it as if the build never happened. This means we never expose partially successful builds to users.
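As a rough illustration of the create/upload/commit flow, here is a Python sketch against a hypothetical flat-manager instance; the endpoint paths, payload and response fields below are illustrative assumptions, not the documented API:

import requests

API = 'https://repo.example.com/api/v1'        # hypothetical instance URL
HEADERS = {'Authorization': 'Bearer <token>'}  # JWT, see "Subsetting tokens"

# 1. Create a build: this just allocates an id for later operations.
resp = requests.post(API + '/build', headers=HEADERS, json={'repo': 'stable'})
build_url = resp.json()['uri']                 # assumed response field

# 2. One or more uploads (e.g. one per architecture) target the same build,
#    so each build machine can push its result directly.
#    ... upload steps elided ...

# 3. Commit once every builder reports success, or purge on any failure,
#    so users never see a partially successful build.
requests.post(build_url + '/commit', headers=HEADERS)
# requests.post(build_url + '/purge', headers=HEADERS)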

Once a build is committed, flat-manager creates a separate repository containing only the new build. This allows you to use Flatpak to test the build before making it available to users.

This makes builds useful even for builds that were never supposed to be generally available. Flathub uses this for test builds, where if you make a pull request against an app it will automatically build it and add a comment in the pull request with the build results and a link to the repo where you can test it.

Publishing

Once you are satisfied with the new build you can trigger a publish operation, which will import the build into the main repository and do all the required repository operations.

The publish operation is actually split into two steps, first it imports the build result in the repo, and then it queues a separate job to do all the updates needed for the repo. This way if multiple builds are published at the same time the update can be shared. This saves time on the server, but it also means less updates to the metadata which means less churn for users.

You can use whatever policy you want for how and when to publish builds. Flathub lets individual maintainers chose, but by default successful builds are published after 3 hours.

Delta generation

The traditional way to generate static deltas is to run flatpak build-update-repo --generate-static-deltas. However, this is a very computationally expensive operation that you might not want to do on your main repository server. It's also not very flexible in which deltas it generates.

To minimize the server load flat-manager allows external workers that generate the deltas on different machines. You can run as many of these as you want and the deltas will be automatically distributed to them. This is optional, and if no workers connect the deltas will be generated locally.

flat-manager also has configuration options for which deltas should be generated. This allows you to avoid generating unnecessary deltas and to add extra levels of deltas where needed. For example, Flathub no longer generates deltas for sources and debug refs, but we have instead added multiple levels of deltas for runtimes, allowing you to go efficiently to the current version from either one or two versions ago.

Subsetting tokens

flat-manager uses JSON Web Tokens to authenticate API clients. This means you can assign different permission to different clients. Flathub uses this to give minimal permissions to the build machines. The tokens they get only allow uploads to the specific build they are currently handling.

This also allows you to hand out access to parts of the repository namespace. For instance, the Gnome project has a custom token that allows them to upload anything in the org.gnome.Platform namespace in Flathub. This way Gnome can control the build of their runtime and upload a new version whenever they want, but they can't (accidentally or deliberately) modify any other apps.
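For illustration, minting such a scoped token with PyJWT might look like the sketch below; the claim names are assumptions here, so check the flat-manager documentation for the claims it actually expects:

import datetime
import jwt  # PyJWT

claims = {
    'sub': 'users/gnome',                  # assumed subject
    'scope': ['upload'],                   # assumed permission scope
    'prefixes': ['org.gnome.Platform'],    # assumed namespace restriction
    'exp': datetime.datetime.utcnow() + datetime.timedelta(days=365),
}
token = jwt.encode(claims, 'repo-secret', algorithm='HS256')
print(token)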

Rust

I need to mention Rust here too. This is my first real experience with using Rust, and I'm very impressed by it. In particular, I'm impressed by the sense of trust I have in the code once I get it past the compiler. The compiler caught a lot of issues, and once things built I saw very few bugs at runtime.

It can sometimes be a lot of work to express the code in a way that Rust accepts, which makes it not an ideal language for sketching out ideas. But for production code it really excels, and I can heartily recommend it!

Future work

Most of the initial list of features for flat-manager are now there, so I don't expect it to see a lot of work in the near future.

However, there is one more feature that I want to see: the ability to (automatically) create subset versions of the repository. In particular, we want to produce a version of Flathub containing only free software.

I have the initial plans for how this will work, but it is currently blocking on some work inside OSTree itself. I hope this will happen soon though.

19 Mar 2019 1:20pm GMT