14 Dec 2017

Planet GNOME

Christian Kellner: Introducing bolt: Thunderbolt 3 security levels for GNU/Linux

Thunderbolt icon by jimmac


Thunderbolt 3 security levels

Thunderbolt is an I/O technology that can be used to connect external peripherals to a computer - similar to USB and FireWire. It works by bridging PCIe between the controllers on each end of the connection, which in turn means that devices connected via Thunderbolt are ultimately connected via PCIe. Therefore Thunderbolt can achieve very high connection speeds, fast enough to even drive external graphics cards. The downside is that it also makes certain attacks possible (e.g. Thunderstrike, DMA attacks).
To mitigate these security problems, the latest version - known as Thunderbolt 3 - supports different security levels:

  * none - the firmware connects all devices automatically, as on older Thunderbolt versions
  * dponly - only DisplayPort tunneling is allowed; no PCIe tunneling is done
  * user - devices need to be authorized by the user before they are connected
  * secure - like user, but in addition the device has to prove its identity via a per-device key

The active security level can normally be selected prior to boot via a BIOS option, but it is interesting to note that in the future the none option is likely to go away. This of course means that connected Thunderbolt devices won't work at all unless they are authorized by the user from within the running operating system.

Intel has added support for the different security levels to the kernel, starting with Linux 4.13. The interface to interact with the devices is via files in sysfs. Since July we have been working on the userspace bits to make Thunderbolt 3 support "just work" 😉. The UX design was drafted by Jimmac. The solution that we came up with consists of two parts: a generic system daemon and, for GNOME, a (new) component in gnome-shell. The latter will use the daemon to automatically authorize new devices. This will happen if and only if the currently active user is an administrator and the session is not locked.
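
Just to illustrate what that sysfs interface looks like, here is a minimal sketch. The domain and device paths (domain0, 0-1) are examples and depend on your controller topology, and the attributes are root-writable, so treat this as an illustration rather than a recommended way to authorize devices:

[sourcecode lang="cpp"]
// Minimal sketch of the kernel's Thunderbolt sysfs interface (Linux 4.13+).
// The device path "0-1" is an example; real paths depend on the topology.
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // The active security level of the first Thunderbolt domain
    // (one of: none, user, secure, dponly).
    std::ifstream security("/sys/bus/thunderbolt/devices/domain0/security");
    std::string level;
    security >> level;
    std::cout << "security level: " << level << std::endl;

    // Authorize the first device behind the host controller by writing "1"
    // to its "authorized" attribute (in "secure" mode a key challenge via
    // the "key" attribute would be used instead).
    std::ofstream authorized("/sys/bus/thunderbolt/devices/0-1/authorized");
    authorized << "1" << std::endl;
    return authorized.good() ? 0 : 1;
}
[/sourcecode]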

bolt 0.1 Accidentally Working

Today I released the first version, 0.1 (aka "Accidentally Working"), of bolt, a system daemon that manages Thunderbolt 3 devices. It provides a D-Bus API to list devices, enroll them (authorize and store them in the local database) and forget them again (remove previously enrolled devices). It also emits signals when new devices are connected (or removed). During enrollment, devices can be set to be automatically authorized as soon as they are connected. A command line tool, called boltctl, can be used to control the daemon and perform all of the above tasks (see the boltctl(1) man page for details).
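
For a rough idea of what consuming that D-Bus API could look like from code, here is a sketch using GDBus. The bus name, object path, interface and method names below are assumptions for illustration only - check the daemon's introspection data for the real API:

[sourcecode lang="cpp"]
// Hypothetical sketch of talking to the bolt daemon over D-Bus with GDBus.
// All bolt-specific names below are assumptions, not the documented API.
#include <gio/gio.h>
#include <iostream>

int main()
{
    GError *error = nullptr;
    GDBusProxy *manager = g_dbus_proxy_new_for_bus_sync(
        G_BUS_TYPE_SYSTEM, G_DBUS_PROXY_FLAGS_NONE, nullptr,
        "org.freedesktop.bolt",            /* assumed bus name */
        "/org/freedesktop/bolt",           /* assumed object path */
        "org.freedesktop.bolt1.Manager",   /* assumed interface */
        nullptr, &error);
    if (!manager) {
        std::cerr << error->message << std::endl;
        g_error_free(error);
        return 1;
    }

    // Ask the daemon for the object paths of all known devices.
    GVariant *result = g_dbus_proxy_call_sync(manager, "ListDevices", nullptr,
                                              G_DBUS_CALL_FLAGS_NONE, -1,
                                              nullptr, &error);
    if (result) {
        gchar *printed = g_variant_print(result, TRUE);
        std::cout << printed << std::endl;
        g_free(printed);
        g_variant_unref(result);
    } else {
        std::cerr << error->message << std::endl;
        g_error_free(error);
    }
    g_object_unref(manager);
    return 0;
}
[/sourcecode]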

I hope that other desktop environments will also find the daemon useful and adopt it. As this is not a stable release yet, we still have room for API changes, so feedback is welcome.

boltctl example output


New software needs testers, so everybody who has a computer with Thunderbolt 3 and feels courageous enough is welcome to give it a try. I created a Copr with builds for Fedora 27 & Rawhide, and Jaroslav created a PKGBUILD file, so Arch users can already find it in the AUR. As this is very fresh software it will contain bugs, and those can be filed at the issue tracker of the GitHub repo.

what's next: gnome-shell integration

I am locally running a proof-of-concept gnome-shell extension that implements the user session bits to complete the design described above: it uses bolt's D-Bus interface, listens for new Thunderbolt devices and then enrolls them, if the user is logged in. Since it can take a while until all devices attached via Thunderbolt are properly connected, the daemon has a Probing property that is used to display a little icon to inform the user that something is happening on the Thunderbolt bus. All of this is already working quite well here on my test machine. In the next few days (weeks?) I will be working on integrating that code into gnome-shell. There are a few open UX questions that need to be addressed, but all in all things are looking good.


Thunderbolt activity indicator (aka cable snake)


Special thanks to Alberto Ruiz, Benjamin Berg, Hans de Goede, Harald Hoyer, Javier Martinez Canillas, Jaroslav Lichtblau, Jakub Steiner, Richard Hughes who all helped and supported this project during the last few months! ❤️

14 Dec 2017 1:05pm GMT

Planet KDE

Elementary LibreOffice

ElementaryIcons.png

Two months ago I started to finalize the existing Elementary icon theme for LibreOffice. It's about 2,000 icons, and they are now available in LibreOffice 6.0 beta. In addition, all icons are available as SVG files, so they can be used and edited easily.

Please download and test LibreOffice 6.0 beta and give feedback. You can switch the icon theme via Tools -> Options -> View -> Icon Style. We are talking about a lot of icons, so not all of them are perfect yet. Feedback is always welcome.

Test LibreOffice 6.0 beta

Merry Christmas and a shiny new year with LibreOffice 6.0.


14 Dec 2017 9:10am GMT

13 Dec 2017

Planet GNOME

Federico Mena-Quintero: Librsvg moves to Gitlab

Librsvg now lives in GNOME's Gitlab instance. You can access it here.

Gitlab allows workflows similar to Github: you can create an account there, fork the librsvg repository, file bug reports, create merge requests... Hopefully this will make it nicer for contributors.

In the meantime, feel free to take a look!

This is a huge improvement for GNOME's development infrastructure. Thanks to Carlos Soriano, Andrea Veri, Philip Chimento, Alberto Ruiz, and all the people that made the move to Gitlab possible.

13 Dec 2017 8:09pm GMT

Planet KDE

Writing a Custom Qt 3D Aspect – part 2

Introduction

In the previous article we gave an overview of the process for creating a custom aspect and showed how to create (most of) the front end functionality. In this article we shall continue building our custom aspect by implementing the corresponding backend types, registering the types and setting up communication from the frontend to the backend objects. This will get us most of the way there. The next article will wrap up by showing how to implement jobs to process our aspect's components.

As a reminder of what we are dealing with, here's the architecture diagram from part 1:

Creating the Backend

One of the nice things about Qt 3D is that it is capable of very high throughput. This is achieved by way of using jobs executed on a threadpool in the backend. To be able to do this without introducing a tangled web of synchronisation points (which would limit the parallelism), we make a classic computer science trade-off and sacrifice memory for the benefit of speed. By having each aspect work on its own copy of the data, it can schedule jobs safe in the knowledge that nothing else will be trampling all over its data.

This is not as costly as it sounds. The backend nodes are not derived from QObject. The base class for backend nodes is Qt3DCore::QBackendNode, which is a pretty lightweight class. Also, note that aspects only store the data that they specifically care about in the backend. For example, the animation aspect does not care about which Material component an Entity has, so no need to store any data from it. Conversely, the render aspect doesn't care about Animation clips or Animator components.

In our little custom aspect, we only have one type of frontend component, FpsMonitor. Logically, we will only have a single corresponding backend type, which we will imaginatively call FpsMonitorBackend:

[sourcecode lang="cpp" title="fpsmonitorbackend.h"]
class FpsMonitorBackend : public Qt3DCore::QBackendNode
{
public:
    FpsMonitorBackend()
        : Qt3DCore::QBackendNode(Qt3DCore::QBackendNode::ReadWrite)
        , m_rollingMeanFrameCount(5)
    {}

private:
    void initializeFromPeer(const Qt3DCore::QNodeCreatedChangeBasePtr &change) override
    {
        // TODO: Implement me!
    }

    int m_rollingMeanFrameCount;
};
[/sourcecode]

The class declaration is very simple. We subclass Qt3DCore::QBackendNode as you would expect; add a data member to mirror the information from the frontend FpsMonitor component; and override the initializeFromPeer() virtual function. This function will be called just after Qt 3D creates an instance of our backend type. The argument allows us to get at the data sent from the corresponding frontend object as we will see shortly.

Registering the Types

We now have simple implementations of the frontend and backend components. The next step is to register these with the aspect so that it knows to instantiate the backend node whenever a frontend node is created. Similarly for destruction. We do this by way of an intermediary helper known as a node mapper.

To create a node mapper, just subclass Qt3DCore::QBackendNodeMapper and override the virtuals to create, look up and destroy the backend objects on demand. The manner in which you create, store, look up and destroy the objects is entirely up to you as a developer. Qt 3D does not impose any particular management scheme upon you. The render aspect does some fairly fancy things with bucketed memory managers and aligning memory for SIMD types, but here we can do something much simpler.

We will store pointers to the backend nodes in a QHash within the CustomAspect and index them by the node's Qt3DCore::QNodeId. The node id is used to uniquely identify a given node, even between the frontend and all available aspect backends. On Qt3DCore::QNode the id is available via the id() function, whereas for QBackendNode you access it via the peerId() function. For the two corresponding objects representing the component, the id() and peerId() functions return the same QNodeId value.

Let's get to it and add some storage for the backend nodes to the CustomAspect along with some helper functions:

[sourcecode lang="cpp" title="customaspect.h"]
class CustomAspect : public Qt3DCore::QAbstractAspect
{
    Q_OBJECT
public:
    ...
    void addFpsMonitor(Qt3DCore::QNodeId id, FpsMonitorBackend *fpsMonitor)
    {
        m_fpsMonitors.insert(id, fpsMonitor);
    }

    FpsMonitorBackend *fpsMonitor(Qt3DCore::QNodeId id)
    {
        return m_fpsMonitors.value(id, nullptr);
    }

    FpsMonitorBackend *takeFpsMonitor(Qt3DCore::QNodeId id)
    {
        return m_fpsMonitors.take(id);
    }
    ...

private:
    QHash<Qt3DCore::QNodeId, FpsMonitorBackend *> m_fpsMonitors;
};
[/sourcecode]

Now we can implement a simple node mapper as:

[sourcecode lang="cpp" title="fpsmonitorbackend.h"]
class FpsMonitorMapper : public Qt3DCore::QBackendNodeMapper
{
public:
    explicit FpsMonitorMapper(CustomAspect *aspect);

    Qt3DCore::QBackendNode *create(const Qt3DCore::QNodeCreatedChangeBasePtr &change) const override
    {
        auto fpsMonitor = new FpsMonitorBackend;
        m_aspect->addFpsMonitor(change->subjectId(), fpsMonitor);
        return fpsMonitor;
    }

    Qt3DCore::QBackendNode *get(Qt3DCore::QNodeId id) const override
    {
        return m_aspect->fpsMonitor(id);
    }

    void destroy(Qt3DCore::QNodeId id) const override
    {
        auto fpsMonitor = m_aspect->takeFpsMonitor(id);
        delete fpsMonitor;
    }

private:
    CustomAspect *m_aspect;
};
[/sourcecode]

To finish this piece of the puzzle, we now need to tell the aspect how these types and the mapper relate to each other. We do this by calling the QAbstractAspect::registerBackendType() template function, passing in a shared pointer to the mapper that will create, find, and destroy the corresponding backend nodes. The template argument is the type of the frontend node for which this mapper should be called. A convenient place to do this is in the constructor of the CustomAspect. In our case it looks like this:

[sourcecode lang="cpp" title="customaspect.cpp"]
CustomAspect::CustomAspect(QObject *parent)
    : Qt3DCore::QAbstractAspect(parent)
{
    // Register the mapper to handle creation, lookup, and destruction of backend nodes
    auto mapper = QSharedPointer<FpsMonitorMapper>::create(this);
    registerBackendType<FpsMonitor>(mapper);
}
[/sourcecode]

And that's it! With that registration in place, any time an FpsMonitor component is added to the frontend object tree (the scene), the aspect will lookup the node mapper for that type of object. Here, it will find our registered FpsMonitorMapper object and it will call its create() function to create the backend node and manage its storage. A similar story holds for the destruction (technically, it's the removal from the scene) of the frontend node. The mapper's get() function is used internally to be able to call virtuals on the backend node at appropriate points in time (e.g. when properties notify that they have been changed).

The Frontend-Backend Communications

Now that we are able to create, access and destroy the backend node for any frontend node, let's see how we can let them talk to each other. There are 3 main times the frontend and backend nodes communicate with each other:

  1. Initialisation - When our backend node is first created we get an opportunity to initialise it with data sent from the frontend node.
  2. Frontend to Backend - Typically when properties on the frontend node get changed we want to send the new property value to the backend node so that it is operating on up to date information.
  3. Backend to Frontend - When our jobs process the data stored in the backend nodes, sometimes this will result in updated values that should be sent to the frontend node.

Here we will cover the first two cases. The third case will be deferred until the next article when we introduce jobs.

Backend Node Initialisation

All communication between frontend and backend objects operates by sending sub-classes of Qt3DCore::QSceneChange. These are similar in nature and concept to QEvent, but the change arbiter that processes the changes has the opportunity to manipulate them in the case of conflicts from multiple aspects, re-order them by priority, or perform any other manipulations that may be needed in the future.

For the purpose of initialising the backend node upon creation, we use a Qt3DCore::QNodeCreatedChange which is a templated type that we can use to wrap up our type-specific data. When Qt 3D wants to notify the backend about your frontend node's initial state, it calls the private virtual function QNode::createNodeCreationChange(). This function returns a node created change containing any information that we wish to access in the backend node. We have to do it by copying the data rather than just dereferencing a pointer to the frontend object because by the time the backend processes the request, the frontend object may have been deleted - i.e. a classic data race. For our simple component our implementation looks like this:

[sourcecode lang="cpp" title="fpsmonitor.h"]
struct FpsMonitorData
{
    int rollingMeanFrameCount;
};
[/sourcecode]

[sourcecode lang="cpp" title="fpsmonitor.cpp"]
Qt3DCore::QNodeCreatedChangeBasePtr FpsMonitor::createNodeCreationChange() const
{
    auto creationChange = Qt3DCore::QNodeCreatedChangePtr<FpsMonitorData>::create(this);
    auto &data = creationChange->data;
    data.rollingMeanFrameCount = m_rollingMeanFrameCount;
    return creationChange;
}
[/sourcecode]

The change created by our frontend node is passed to the backend node (via the change arbiter) and gets processed by the initializeFromPeer() virtual function:

[sourcecode lang="cpp" title="fpsmonitorbackend.cpp"]
void FpsMonitorBackend::initializeFromPeer(const Qt3DCore::QNodeCreatedChangeBasePtr &change)
{
    const auto typedChange = qSharedPointerCast<Qt3DCore::QNodeCreatedChange<FpsMonitorData>>(change);
    const auto &data = typedChange->data;
    m_rollingMeanFrameCount = data.rollingMeanFrameCount;
}
[/sourcecode]

Frontend to Backend Communication

At this point, the backend node mirrors the initial state of the frontend node. But what if the user changes a property on the frontend node? When that happens, our backend node will hold stale data.

The good news is that this is easy to handle. The implementation of Qt3DCore::QNode takes care of the first half of the problem for us. Internally it listens to the Q_PROPERTY notification signals and when it sees that a property has changed, it creates a QPropertyUpdatedChange for us and dispatches it to the change arbiter which in turn delivers it to the backend node's sceneChangeEvent() function.

So all we need to do as authors of the backend node is to override this function, extract the data from the change object and update our internal state. Often you will then want to mark the backend node as dirty in some way so that the aspect knows it needs to be processed next frame. Here though, we will just update the state to reflect the latest value from the frontend:

[sourcecode lang="cpp" title="fpsmonitorbackend.cpp"]
void FpsMonitorBackend::sceneChangeEvent(const Qt3DCore::QSceneChangePtr &e)
{
    if (e->type() == Qt3DCore::PropertyUpdated) {
        const auto change = qSharedPointerCast<Qt3DCore::QPropertyUpdatedChange>(e);
        if (change->propertyName() == QByteArrayLiteral("rollingMeanFrameCount")) {
            const auto newValue = change->value().toInt();
            if (newValue != m_rollingMeanFrameCount) {
                m_rollingMeanFrameCount = newValue;
                // TODO: Update fps calculations
            }
            return;
        }
    }
    QBackendNode::sceneChangeEvent(e);
}
[/sourcecode]

If you don't want to use the built in automatic property change dispatch of Qt3DCore::QNode then you can disable it by wrapping the property notification signal emission in a call to QNode::blockNotifications(). This works in exactly the same manner as QObject::blockSignals() except that it only blocks sending the notifications to the backend node, not the signal itself. This means that other connections or property bindings that rely upon your signals will still work.

If you block the default notifications in this way, then you need to send your own to ensure that the backend node has up-to-date information. Feel free to subclass any class in the Qt3DCore::QSceneChange hierarchy and bend it to your needs. A common approach is to subclass Qt3DCore::QStaticPropertyUpdatedChangeBase, which handles the property name, and to add a strongly typed member for the property value payload in the subclass. The advantage of this over the built-in mechanism is that it avoids using QVariant, which can suffer a little performance-wise in highly threaded contexts. Usually though, the frontend properties don't change too frequently and the default is fine.
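
Purely as an illustration (this is not part of the example code above), such a strongly typed change and a frontend setter using it could look roughly like this for our rollingMeanFrameCount property. The change class, its Ptr typedef and the rollingMeanFrameCountChanged() signal name are assumptions based on part 1's conventions:

[sourcecode lang="cpp"]
// Illustrative only: a strongly typed property change that avoids QVariant.
class FpsMonitorFrameCountChange : public Qt3DCore::QStaticPropertyUpdatedChangeBase
{
public:
    explicit FpsMonitorFrameCountChange(Qt3DCore::QNodeId subjectId)
        : Qt3DCore::QStaticPropertyUpdatedChangeBase(subjectId)
        , rollingMeanFrameCount(0)
    {
        setPropertyName("rollingMeanFrameCount");
    }

    int rollingMeanFrameCount; // the typed payload, no QVariant needed
};
typedef QSharedPointer<FpsMonitorFrameCountChange> FpsMonitorFrameCountChangePtr;

// A setter on the frontend FpsMonitor that blocks the default notification
// and sends the typed change instead (signal name assumed from part 1).
void FpsMonitor::setRollingMeanFrameCount(int frameCount)
{
    if (frameCount == m_rollingMeanFrameCount)
        return;
    m_rollingMeanFrameCount = frameCount;

    // Emit the Qt signal for regular connections and bindings, but stop
    // QNode from also dispatching its QVariant-based QPropertyUpdatedChange.
    const bool wasBlocked = blockNotifications(true);
    emit rollingMeanFrameCountChanged(frameCount);
    blockNotifications(wasBlocked);

    // Send our strongly typed change to the backend via the change arbiter.
    auto change = FpsMonitorFrameCountChangePtr::create(id());
    change->rollingMeanFrameCount = frameCount;
    notifyObservers(change);
}
[/sourcecode]

On the backend side, sceneChangeEvent() would then check the property name as before and qSharedPointerCast the change to the typed class to read the value directly, without going through QVariant.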

Summary

In this article we have shown how to implement most of the backend node; how to register the node mapper with the aspect to create, lookup and destroy backend nodes; how to initialise the backend node from the frontend node in a safe way and also how to keep its data in sync with the frontend.

In the next article we will finally make our custom aspect actually do some real (if simple) work, and learn how to get the backend node to send updates to the frontend node (the mean fps value). We will ensure that the heavy lifting parts get executed in the context of the Qt 3D threadpool so that you get an idea of how it can scale. Until next time.


13 Dec 2017 10:49am GMT

12 Dec 2017

Planet KDE

Namaste ! (on the road to Swatantra 2017)

This is a little blog post from India. I've been invited to give not one, but two talks at Swatantra 2017, the triennial conference organised by ICFOSS in Thiruvananthapuram (also known by its shorter old name, Trivandrum), Kerala.

I'll have the pleasure to give a talk about GCompris, and another one about Synfig Studio. It's been a long time since I last talked about the latter, but since Konstantin Dmitriev and the Morevna team were not available, I'll do my best to represent Synfig there.




(little teaser animation of the event banner, done with Synfig studio)

I'll also meet some friends from Krita, David Revoy and Raghavendra Kamath, so even if there is no talk dedicated to Krita, it should be well represented.

The event will happen on the 20th and 21st of December, and my talks will be on the second day. Until then, I'm spending a week visiting and enjoying the south of India.

You can find more info on the official website of the event: swatantra.net.in. Many thanks again to the nice organization team at ICFOSS for the invitation!

12 Dec 2017 12:06pm GMT

Planet GNOME

Jehan Pagès: New format in GIMP: HGT

Lately a recurring contributor to the GIMP project (Massimo Valentini) contributed a patch to support HGT files. Since I found this data quite cool, I improved the support a bit on top of this initial commit (in particular auto-detection of the variants and special-casing, as well as an API for scripts).

So what is HGT? It is basically topography data containing just the elevation in meters of various landscapes (HGT stands for "height"), gathered by the Shuttle Radar Topography Mission (SRTM) run by various space agencies (NASA, the National Geospatial-Intelligence Agency, the German and Italian space agencies…). To know more, you can read here and there.
HGT download source: https://dds.cr.usgs.gov/srtm/version2_1/
(go inside the SRTM1/ and SRTM3/ directories for 1 arc-second and 3 arc-second sampled data respectively)
You probably won't find many other links of interest since not everyone can produce such data (not everyone has satellites!).

Here is what it can look like after processing: on the left is an image obtained from a NASA PDF, and on the right is the same data imported in GIMP followed by a gradient mapping.

So the support is not perfect yet: to get a nice looking result, you need to do it in several steps, and that likely involves a bunch of tweaking. My output above is not that good (the colors look a bit radioactive compared to the NASA one!) but that's mostly because I didn't take the time to tweak more.

And that's why I am writing this blog post: someone trying to import HGT files in GIMP may be a bit disconcerted at first. You'd likely get a nearly uniform-looking grey image and may think that HGT import is broken. It is not.

What's happening? Why is the imported HGT all uniform grey?

GIMP by default will convert the HGT data into greyscale. That is not a problem by itself since we can have very well contrasted greys. But that doesn't happen for HGT import. Why?

HGT contains elevation data as signed 16-bit integers representing meters. In other words, it represents elevations from -32767 m to 32767 m (with an exception for -32768, which means "void", i.e. invalid data; since that's raw data with minimal processing, it can indeed contain errors). Therefore, once mapped to the [0; 1] range, the value 0 (pure black) is invalid, ]0; 0.5[ is anything below sea level and [0.5; 1] is at or above sea level.

Considering that on earth, the highest point is Mount Everest at 8848m, when mapped to our [0; 1] range, we see it has value 0.635. So you can see the problem: most things on earth will be represented with greys really close to 0.5 and that's why there is no contrast.

How to get nice colors and contrast?

There are several solutions, but the one proposed by the contributor was to use the "Gradient Map" plug-in. That's a good idea. Basically you remap your greys from 0 to 1 into color gradients.
Now you can try to create a gradient by setting random stops through the GUI, but that will most likely be quite a challenge. A better idea is to do it a bit more "scientifically", i.e. with numbers (which you can also do through the GUI by using the new blend tool, though not as accurately as I'd like with only 2 decimal places). This is what Massimo did here by creating a gradient file which maps "magenta for invalid data, blue below zero, green to 1000 m, yellow to 2000 m, and gray to white above". From this base, I added a bit of random tweaking because I was trying to get an output similar to the NASA document (just for the sake of it), so you can take a look at what my own gradient file looks like. But if you are looking to, say, create a relief map with an accurate elevation/color mapping, you'd prefer to stick to the number-only approach.

Then once you get your gradient "code", copy it into a file with the extension .ggr inside the gradients/ folder of your GIMP config, and just use it when running the "Gradient Map" filter.

Just to explain the format a bit: on each line, you get the startpoint, midpoint and endpoint coordinates (in the [0; 1] range), followed by 4 values for the startpoint RGBA color and then 4 values for the endpoint RGBA color (also in the [0; 1] range). Then you get an integer for the blending mode (you likely want to keep it linear, i.e. 0, for a relief map), then the coloring type (leave it at 0 as well, which is RGB). Finally the last 2 integers determine whether the startpoint and endpoint are fixed colors, or correspond to the foreground, background, etc. You will likely want to keep them as fixed colors (0).

So basically a line like this:

0.500000 0.507633 0.515267 0.000000 1.000000 0.000000 1.000000 0.000000 0.500000 0.000000 1.000000 0 0 0 0

means: the gradient from 0 m (0.5) to 1000 m ((0.515267 - 0.5) × 2¹⁶ ≈ 1000) is a linear gradient from RGBA 0-1-0-1 (green) to RGBA 0-0.5-0-1. That is:

start mid end Rs Gs Bs As Re Ge Be Ae 0 0 0 0

where start is the start elevation and end the end elevation in [0; 1] range; and RsGsBsAs and ReGeBeAe are respectively the start and end gradient colors.

That's how you can easily map the elevation into colors! I hope that's clear! 🙂
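
If you prefer to compute the stops programmatically, here is a small illustrative sketch using the mapping described above (normalized value = 0.5 + meters / 2¹⁶). It only prints segment lines; a complete .ggr file also needs the usual header, which is not shown here:

[sourcecode lang="cpp"]
// Small sketch: generate GIMP gradient (.ggr) segment lines from elevations
// in meters, since HGT stores signed 16-bit meters mapped to [0; 1].
#include <array>
#include <cstdio>

// Convert an elevation in meters to GIMP's normalized [0; 1] grey value.
static double normalized(double meters)
{
    return 0.5 + meters / 65536.0;
}

// Print one gradient segment: start/end elevations in meters, RGBA colors.
static void printSegment(double startMeters, double endMeters,
                         const std::array<double, 4> &startColor,
                         const std::array<double, 4> &endColor)
{
    const double start = normalized(startMeters);
    const double end = normalized(endMeters);
    const double mid = (start + end) / 2.0;   // midpoint halfway by default
    std::printf("%f %f %f %f %f %f %f %f %f %f %f 0 0 0 0\n",
                start, mid, end,
                startColor[0], startColor[1], startColor[2], startColor[3],
                endColor[0], endColor[1], endColor[2], endColor[3]);
}

int main()
{
    // Green at sea level fading to dark green at 1000 m, as in the line above.
    printSegment(0.0, 1000.0, {0.0, 1.0, 0.0, 1.0}, {0.0, 0.5, 0.0, 1.0});
    return 0;
}
[/sourcecode]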

Can't we have nicer support with a GUI?

Yes of course. It was fun and cool to review and then improve this feature, and we should not let quality patches rot in our bug tracker, but it's not my priority (as you know), so I stopped improving it (if I didn't stop myself from all this fun stuff out there, when would I work on ZeMarmot?!).
I gladly accept new patches to improve the support and have left myself 2 bug reports with ideas about how to improve the current tools.

In the meantime, I leave this blog post so that the format is at least understandable and HGT import usable to moderately technical people. 🙂

That's it! Hopefully this post will be useful to someone needing to process HGT files with GIMP and willing to understand how this works, until we get more intuitive support.

Reminder: my Free Software coding can be funded on Liberapay, Patreon or Tipeee through the ZeMarmot project.

12 Dec 2017 4:09am GMT

11 Dec 2017

planet.freedesktop.org

Eric Anholt: 2017-12-11

It's been a while since I posted a TWIV update, so this one will be big:

For VC5 GL features:

While running DEQP tests on all this (which unfortunately don't complete yet due to running out of memory on my 7268 without swap), I've also rebased my Vulkan series and started on implementing image layout for it.

I also tested Timothy Arceri's gallium NIR linking pass. The goal of that is to pack and dead-code-eliminate varyings up in shared code. It's a net ~0 effect on vc4 currently, but it will help vc5, and I may be able to dead-code-eliminate some of the vc4 compiler backend now that the IR coming into the driver is cleaner.

On the VC4 front, Boris has posted a series for performance counter support. This was a pretty big piece of work, and our hope is that with the addition of performance counters we'll be able to dig into those workloads where vc4 is slower than the closed driver and actually fix them. Unfortunately he hasn't managed to build frameretrace yet, so we haven't really tested it on its final intended workload.

For VC4 GL, I did a bit of work on minetest performance, improving the game's fps from around 15 to around 17. Its desktop GL renderer is really unfortunate, using a lot of immediate-mode GL, but I was completely unable to get its GLES renderer branch to build. It also lacks a reproducible/scriptable benchmark mode, so most of my testing was against an apitrace, which is very hard to get useful performance data from.

I debugged a crash in vc4 with large vertex counts that a user had reported, landed a fix for a kernel memory leak, and landed Dave Stevenson's HVS format support (part of his work on getting video decode into vc4 GL).

Finally, I did a bit of research and work to help unblock Dave Stevenson's unicam driver (the open source camera driver). Now that we have an ack for the DT binding, we should be able to get it merged for 4.16!

11 Dec 2017 12:30am GMT

06 Dec 2017

planet.freedesktop.org

Bastien Nocera: UTC and Anywhere on Earth support

A quick post to tell you that we finally added UTC support to Clocks' and the Shell's World Clocks section. And if you're into it, there's also Anywhere on Earth support.

You will need to have git master versions of libgweather (our cities and timezones database), and gnome-clocks. This feature will land in GNOME 3.28.



Many thanks to Giovanni for coming up with an API he was happy with after I attempted a couple of iterations on one. Enjoy!

Update: As expected, a bug crept in. Thanks to Colin Guthrie for spotting the error in the "Anywhere on Earth" timezone. See this section for the fun we have to deal with.

06 Dec 2017 2:32pm GMT

28 Nov 2017

planet.freedesktop.org

Robert Foss: Building ChromiumOS for Qemu


So let's start off by covering how ChromiumOS relates to ChromeOS. The ChromiumOS project is essentially ChromeOS minus branding and some packages for things like media digital restrictions management.

But on the whole, almost everything is there, and the pieces that aren't, you don't need.

ChromiumOS

Depot tools

In order to check out ChromiumOS and other large Google projects, you'll need depot tools.

git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=$PATH:$PWD/depot_tools

Maybe you'd want to add the PATH export to your .bashrc.

Building ChromiumOS

mkdir chromiumos
cd chromiumos
# Fetch the manifest and sync all source repositories (-j75 = 75 parallel jobs)
repo init -u https://chromium.googlesource.com/chromiumos/manifest.git --repo-url https://chromium.googlesource.com/external/repo.git [-g minilayout]
repo sync -j75
# Enter the ChromiumOS chroot, then pick the target board and set it up
cros_sdk
export BOARD=amd64-generic
./setup_board --board …

28 Nov 2017 10:32am GMT