22 Jul 2018


Fedora Community Blog: [Week 9-10] GSoC Status Report for Fedora App: Abhishek Sharma

This is the status report for the Fedora App, filled in by participants on a weekly basis.

Status Report for Abhishek Sharma (thelittlewonder)

Tasks Completed

Fedora Podcasts

Finally, we managed to integrate the Fedora Podcast into the app. You can now browse through the list of Fedora Podcast episodes and listen to them, right from the application.

Google Play Store Promotional Designs

Since we are nearing the launch of the app, I took some time to work on the promotional graphics for the app store. I designed six screenshots and a feature graphic along the same lines as the app design.

Redesigned App Icon

I also redesigned the app icon according to Google's new Adaptive Icon guidelines for Android, and to match the overall style of the application.

What's Happening

Add Loader

The last piece of the puzzle is to add a loader to the application. Since most of the data is fetched from APIs, we need one to show progress. Hopefully, the pending PRs will be reviewed this week, and we will be able to integrate the loader.

Microcopy Improvement

At certain places in the application, the copy is unclear or ambiguous. We will work on improving the microcopy to present a uniform, clear tone throughout the app.

Add Documentation

The GSoC project is in its last stages. I will improve the documentation for the code I have written so that it's easier for other developers to contribute to the development of the Fedora App.

That's all for this week. 👋 I will see you next week with the final project update, hopefully.

Send your feedback at guywhodesigns[at]gmail[dot]com

The post [Week 9-10] GSoC Status Report for Fedora App: Abhishek Sharma appeared first on Fedora Community Blog.

22 Jul 2018 5:39pm GMT

21 Jul 2018


farhaan: Template Method Design Pattern

This is a continuation of the design pattern series.

I had blogged about Singleton once, back when I was using it very frequently. This post is about the Template Method design pattern. Let's discuss the pattern, then dive into the code and its implementation and look at a couple of use cases.

The Template Method Design Pattern is actually a pattern to follow when there is a series of steps that needs to be followed in a particular order. Well, the next question that arises is, "Isn't every program a series of steps that has to be followed in a particular order?"

The answer is Yes!

Where this pattern diverges is that the series of steps becomes a series of methods that have to be executed in the given order. As the name suggests, it is the Template Method design pattern, with stress on the word method, because that is what makes it a different ball game altogether.

Let's understand this with the example of eating at a buffet. Most of us follow a similar set of steps when eating at a buffet: we all go for the starters first, followed by the main course and then, finally, dessert. (Unless it is Barbeque Nation, in which case it's starters, starters and starters :))

So this is kind of a template for everyone: Starters --> Main course --> Desserts.

Keep in mind that the contents of each category can differ from person to person, but the order doesn't change, which gives us a way to have a template in the code. The primary use of any design pattern is to reduce duplicated code or to solve a specific problem. Here, this pattern solves the problem of code duplication.

The concept of the Template Method design pattern depends on, or rather is very tightly coupled with, abstract classes. Abstract classes are themselves a template for derived classes to follow, but the Template Method pattern takes it one notch higher, giving you a template within a template. Here's an example of a BuffetHogger class.

from abc import ABC, abstractmethod

class BuffetHogger(ABC):

    # Each course is an abstract method: derived classes decide
    # *what* gets eaten at each stage.
    @abstractmethod
    def starter_hogging(self):
        pass

    @abstractmethod
    def main_course_hogging(self):
        pass

    @abstractmethod
    def dessert_hogging(self):
        pass

    # The template method: it fixes the *order* of the stages
    # and is inherited unchanged by every derived class.
    def template_hogging(self):
        self.starter_hogging()
        self.main_course_hogging()
        self.dessert_hogging()
So if you look here, starter_hogging, main_course_hogging and dessert_hogging are abstract methods, which means every derived class has to implement them, while template_hogging uses these methods and stays the same for every derived class.

Let's have a Farhaan class who is a BuffetHogger and see how it goes.

class Farhaan(BuffetHogger):
    def starter_hogging(self):
        print("Eat Chicken Tikka")
        print("Eat Kalmi Kebab")

    # Calling the instance simply runs the template.
    def __call__(self):
        self.template_hogging()

    def main_course_hogging(self):
        print("Eat Biryani")

    def dessert_hogging(self):
        print("Eat Phirni")
Now you can spawn as many BuffetHogger classes as you want, and they'll all have the same way of hogging. That's how we solve the problem of code duplication.

Hope this post inspires you to use this pattern in your code too.

Happy Hacking!

21 Jul 2018 3:27pm GMT

Jonathan Dieter: Small file performance on distributed filesystems - Round 2



[Image: View from a bench]

Last year, I ran some benchmarks on the GlusterFS, CephFS and LizardFS distributed filesystems, with some interesting results. I had a request to redo the test after a LizardFS RC was released with a FUSE3 client, since it is supposed to give better small file performance.

I did have a request last time to include RozoFS, but, after a brief glance at the documentation, it looks like it requires a minimum of four servers, and I only had three available. I also looked at OrangeFS (originally PVFS2), but it doesn't seem to provide replication, and, in preliminary testing, it was over ten times slower than the alternatives. NFS was tested and its results are included as a baseline.

I once again used compilebench, which was designed to emulate real-life disk usage by creating a kernel tree, reading all the files in the tree, simulating a compile of the tree, running make clean, and finally deleting the tree.
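
A typical invocation looks something like the following; the flags are quoted from memory and may vary between versions, and the mount point is hypothetical:

compilebench -D /mnt/distfs -i 10 -r 30

Here -D points at the filesystem under test, -i sets the number of initial kernel trees to create, and -r the number of runs.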

The test was much the same as last time, but with one important difference. Last time, the clients ran on the same machines as the servers. LizardFS benefited hugely from this, as it has a "prefer local chunkserver" feature that skips the network completely if there's a copy on the local server. This time around, the clients ran on completely separate machines from the servers, which removed that advantage for LizardFS but which I believe is a better reflection of how distributed filesystems are generally used.

I would like to quickly note that there was very little speed difference between LizardFS's FUSE2 and FUSE3 clients. The numbers included are from the FUSE3 client, but they only differed by a few percentage points from the FUSE2 client.

A huge thank you to my former employer, the Lebanon Evangelical School for Boys and Girls, for allowing me to use their lab for my test. The test was run on nine machines, three running as servers and six running the clients. The three servers operated as distributed data servers with three replicas per file. Each client machine ran five clients, giving us a simulated 30 clients.

All of the data was stored on XFS partitions on SSDs for speed, except for CephFS, which used an LVM partition with Bluestore. After running the benchmarks with one distributed filesystem, it was shut down and its data deleted, so each distributed filesystem had the same disk space available to it.

The NFS server was set up to export its shares async (for speed). The LizardFS clients used the recommended mount options, while the other clients just used the defaults (the recommended small-file options for GlusterFS caused the test to hang). CephFS was mounted using the kernel module rather than the FUSE filesystem.
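
For reference, an async export in /etc/exports looks something like this (the path and client pattern are hypothetical, not taken from the test setup):

/srv/nfs 192.168.0.0/24(rw,async,no_subtree_check)

The async option lets the server acknowledge writes before they reach stable storage, which is what makes it fast (and less crash-safe) for small-file workloads.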

Before running the 30 clients simultaneously, I ran the test ten times in a row on a single client to get a single-client baseline. Let's look at that performance first:

[Chart: single-client performance results]
So, apart from the simulated "make clean", CephFS dominated these tests. It even managed to beat out NFS on everything except clean and delete, and delete was within a couple of seconds. LizardFS and GlusterFS were close in most of the tests with LizardFS taking a slight lead. GlusterFS, though, was much slower than the alternatives when it came to the delete test, which is consistent with last year's test.

Next, let's look at multiple-client performance. With these tests, I ran 30 clients simultaneously and, for the first four tests, summed their speeds to get the total throughput the servers were providing to the clients. Because the deletions ran simultaneously, I averaged the times for the final test.

[Chart: aggregate multi-client performance results]
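
In code, that aggregation works roughly like the sketch below; it's a minimal illustration, and the test names and per-client numbers are invented, not the actual results.

# Hypothetical per-client results: MB/s for the throughput tests,
# seconds for the delete test (three clients shown; the real test had 30).
throughput = {
    "create":  [10.2, 9.8, 11.1],
    "read":    [55.0, 53.4, 57.2],
    "compile": [20.1, 19.7, 21.0],
    "clean":   [30.5, 29.9, 31.2],
}
delete_seconds = [41.0, 43.5, 40.2]

# Clients ran in parallel, so the aggregate speed the servers sustained
# is the sum across clients...
aggregate = {test: sum(speeds) for test, speeds in throughput.items()}

# ...while the simultaneous deletions are better summarized by the mean time.
avg_delete = sum(delete_seconds) / len(delete_seconds)

print(aggregate)
print(f"average delete time: {avg_delete:.1f}s")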
Ok, just wow. If you're reading and writing large numbers of small files, NFS is probably still going to be your best bet. It was over five times faster than the competition at writing and over twice as fast at reading. The compile process is where things started to change, with both CephFS and LizardFS beating NFS, and LizardFS taking a huge lead in the clean and delete tests. Interestingly, it took just 50% longer for LizardFS to delete 30 clients' files than a single client's files.

After CephFS's amazing performance in single-client mode, I was looking forward to some incredible results, but it really didn't scale as well as I had hoped, though it was still competitive with the other distributed filesystems. Once again, LizardFS has shown that when it comes to metadata operations it's really hard to beat, but its aggregate read and write performance was disappointing. And, once again, GlusterFS really struggled with the test. I wish it had worked with the small-file performance tuning enabled, as we might have seen better results.

21 Jul 2018 12:13pm GMT