28 May 2020

Planet Grep

Frank Goossens: Google PageSpeed Insights updated, new metrics and recommendations!

If you tested your blog's performance on Google PageSpeed Insights yesterday and do so again today, you might be in for a surprise: a lower score, even though not a single byte changed on your site. The reason: Google updated PageSpeed Insights to Lighthouse 6, which changes the KPIs (the lab data metrics) that are reported, adds new opportunities and recommendations, and changes the way the total score is calculated.

It all starts with the changed KPIs in the lab metrics, really; whereas up until yesterday First Contentful Paint, Speed Index, Time to Interactive, First Meaningful Paint, First CPU Idle and First Input Delay were measured, the last three are no longer shown, having been replaced by Largest Contentful Paint (LCP), Total Blocking Time (TBT) and Cumulative Layout Shift (CLS).

The total score is calculated based on all 6 metrics, but the weight of the 3 "old" ones (FCP, SI, TTI) is significantly lowered (from 80% to 45%), while the new LCP & TBT account for a whopping 50% of your score (CLS is only 5%).
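
As a back-of-the-envelope sketch of the new weighting (the per-metric weights below are assumptions consistent with the 45/50/5 split above, not values stated in this post), a page that aces the old metrics but scores 50 on LCP and TBT now lands much lower:

```shell
# Assumed weights: FCP/SI/TTI 15% each (45%), LCP/TBT 25% each (50%), CLS 5%
# Per-metric scores are on a 0-100 scale
awk 'BEGIN {
  fcp=100; si=100; tti=100; lcp=50; tbt=50; cls=100
  print 0.15*fcp + 0.15*si + 0.15*tti + 0.25*lcp + 0.25*tbt + 0.05*cls
}'
# → 75
```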

Lastly, I noticed one very interesting opportunity and two recommendations.

Summary: Google Pagespeed Insights changed a lot and it forces performance-aware users to stay on their toes. Especially sites with lots of (3rd party) JavaScript might want to reconsider some of the tools used.

28 May 2020 11:36am GMT

Xavier Mertens: [SANS ISC] Flashback on CVE-2019-19781

I published the following diary on isc.sans.edu: "Flashback on CVE-2019-19781":

First of all, did you know that the Flame malware turns 8 years old today? Happy Birthday! The discovery of this famous malware was announced on May 28th, 2012. The malware was used for targeted cyber espionage activities in the Middle East. This malware was probably developed by a nation-state organization. It infected a limited number of hosts (~1000 computers), making it a targeted attack… [Read more]

[The post [SANS ISC] Flashback on CVE-2019-19781 has been first published on /dev/random]

28 May 2020 11:31am GMT

25 May 2020

Mattias Geniar: What else can you stuff in a certificate chain?

I recently learned that quite a few (old) root certificates are going to expire, and many websites still send those along in the TLS handshake.
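
One quick way to see exactly which certificates a site sends along in the handshake (the hostname and file name below are just examples) is openssl's s_client:

```shell
# Print every certificate the server presents in the TLS handshake
openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null

# Then check the expiry date of any certificate you saved from that output
openssl x509 -enddate -noout -in saved-cert.pem
```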

25 May 2020 12:00am GMT

23 May 2020

Xavier Mertens: [SANS ISC] AgentTesla Delivered via a Malicious PowerPoint Add-In

I published the following diary on isc.sans.edu: "AgentTesla Delivered via a Malicious PowerPoint Add-In":

Attackers are always trying to find new ways to deliver malicious code to their victims. Microsoft Word and Excel documents can be easily weaponized by adding malicious VBA macros. Today, they are one of the most common techniques to compromise a computer, especially because Microsoft implemented macros that execute automatically when the document is opened. In Word, the macro must be named AutoOpen(). In Excel, the name must be Workbook_Open(). However, PowerPoint does not support this kind of macro. Really? Not in the same way as Word and Excel do… [Read more]

[The post [SANS ISC] AgentTesla Delivered via a Malicious PowerPoint Add-In has been first published on /dev/random]

23 May 2020 12:14pm GMT

21 May 2020

Xavier Mertens: [SANS ISC] Malware Triage with FLOSS: API Calls Based Behavior

I published the following diary on isc.sans.edu: "Malware Triage with FLOSS: API Calls Based Behavior":

Malware triage is a key component of your hunting process. When you collect suspicious files from multiple sources, you need a tool to automatically process them to extract useful information. To achieve this task, I'm using FAME which means "FAME Automates Malware Evaluation". This framework is very nice due to the architecture based on plugins that you can enable upon your needs. Here is an overview of my configuration… [Read more]

[The post [SANS ISC] Malware Triage with FLOSS: API Calls Based Behavior has been first published on /dev/random]

21 May 2020 10:02am GMT

19 May 2020

Fabian Arrotin: Deploying OpenShift 4 on bare-metal and disabling dhcp

Recently I had to work with one of my colleagues (David) on something that was new to me: OpenShift. I had never really looked at OpenShift but knew the basic concepts, at least for OKD 3.x.

With 4.x, OCP is completely different: instead of deploying a "normal" Linux distro (like CentOS in our case), it now uses RHCOS (so CoreOS) as its foundation. The goal of this blog post is not to dive into all the technical steps required to deploy/bootstrap the OpenShift cluster, but to discuss one particular 'issue' that I found annoying while deploying: how to disable dhcp on the CoreOS provisioned nodes.

To cut a long story short, you can read the basic steps needed to deploy OpenShift on bare-metal in the official doc.

Have you read it? Good, now we can move forward :)

After we had configured our install-config.yaml (with our needed values) and also generated the manifests with openshift-install create manifests --dir=/path/, we thought it would just be a matter of deploying with the ignition files built by the openshift-install create ignition-configs --dir=/path step (see the above doc for all details).

It's true that we ended up with the ignition files we needed.

Those ignition files are (more or less) like traditional kickstart files that let you automate the RHCOS deployment on bare-metal. The other part is really easy, as it's just a matter (with ansible in our case) of configuring the tftp boot argument and calling an ad-hoc task to remotely force a physical reinstall of the machine (through ipmi):
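
The actual task isn't shown here, but as a rough sketch (the BMC hostname and credentials are hypothetical placeholders, and the ansible wrapping is omitted), forcing such a PXE reinstall with ipmitool boils down to:

```shell
# Tell the BMC to PXE-boot on next start, then power-cycle the node
ipmitool -I lanplus -H node1-bmc.example.com -U admin -P 'secret' chassis bootdev pxe
ipmitool -I lanplus -H node1-bmc.example.com -U admin -P 'secret' chassis power cycle
```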

So we first kicked off the bootstrap node (an ephemeral node used as a temporary master, from which the real masters forming the etcd cluster will get their initial config), but then we realized that, while RHCOS was installed and responding on the fixed IP we set through pxeboot kernel parameters (and correctly applied after the reboot), each RHCOS node was also trying by default to activate all NICs present in the machine.

That suddenly became "interesting", as we don't fully control the network where those machines are, and each physical node has 4 NICs, all in the same vlan, in which we also have a small dhcp range for other deployments. Do you see the problem with etcd and members in the same subnet having multiple IP addresses? Yeah, it wasn't working, as we saw some requests coming from the dhcp-configured interfaces instead of the first, properly configured NIC in each system.

The "good" thing is that you can still ssh into each deployed RHCOS node (even if that's not advised) to troubleshoot this. We discovered that RHCOS still uses NetworkManager, but that the default behaviour is to enable all NICs with DHCP if nothing else is declared, which is exactly what we needed to disable.

After some research and help from Colin Walters, we were pointed to this bug report for CoreOS.

With the traditional "CentOS Linux" sysadmin mindset, I thought: "good, we can just automate with ansible, ssh'ing into each provisioned RHCOS node to disable it". But there had to be a cleverer way to deal with this, as it was also impacting our initial bootstrap and master nodes (so no way to get the cluster up).

That's when we found this: customizing the deployment with a Day-0 config; the doc gives a simple example for Chrony.

That's how I understood the concept of MachineConfig and how it's supposed to work both for a provisioned cluster and for the bootstrap process. So let's use that information to create what we need and start a fresh deploy.

Assuming that we want to create our manifests in:

openshift-install create manifests --dir=/<path>/

Now that we have manifests, let's inject our machine configs. You'll see that, because it's YAML all over the place, injecting YAML in YAML would be "interesting", so the convention here is to inject file content as a base64 encoded string, everywhere.
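
The whole encoding convention can be sketched in a couple of shell lines (the data: URL format below is what the MachineConfig's file source field expects):

```shell
# Build the data: URL a MachineConfig expects for a file's contents
content='[main]
no-auto-default=*'
echo "data:text/plain;charset=utf-8;base64,$(printf '%s\n' "$content" | base64)"
# → data:text/plain;charset=utf-8;base64,W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
```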

Let's suppose that we want the file /etc/NetworkManager/conf.d/disabledhcp.conf with this content on each provisioned node (master and worker), to tell NetworkManager not to default to auto/dhcp:

[main]
no-auto-default=*

Let's first encode it to base64:

cat << EOF | base64
[main]
no-auto-default=*
EOF

Our base64 value is W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
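
A quick sanity check before pasting that value anywhere: decoding it must give back exactly the snippet we started from:

```shell
echo 'W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==' | base64 -d
# → [main]
# → no-auto-default=*
```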

So now that we have content, let's create manifests to create automatically that file at provisioning time :

pushd <path>
# Prevent pods from being scheduled on the control plane (master) machines
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml

pushd openshift
for variant in master worker; do 
cat << EOF > ./99_openshift-machineconfig_99-${variant}-nm-nodhcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${variant}
  name: nm-${variant}-nodhcp
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
          verification: {}
        filesystem: root
        mode: 0644
        path: /etc/NetworkManager/conf.d/disabledhcp.conf
  osImageURL: ""
EOF

done
popd
popd

I think this snippet is pretty straightforward, and you can see in the source how we "inject" the content of the file itself (the base64 value we got in the previous step).

Now that we have added our customizations, we can just run the openshift-install create ignition-configs --dir=/<path> command again, retrieve our .ign files, and call ansible again to redeploy the nodes. This time they were deployed correctly, with only the IP coming from the ansible inventory and no other NIC using dhcp.

Now that it works, deploying/adding more worker nodes to the OCP cluster is just a matter of calling ansible, and physical nodes are deployed in roughly 5 minutes (as RHCOS just extracts its own archive on disk and reboots).

I don't know if I'll have to take multiple deep dives into OpenShift in the future, but at least I learned multiple things. And yes: you always learn more when you have to deploy something for the first time and it doesn't work straight away... so while you learn the basics from the official doc, you also have to find other resources/docs elsewhere :-)

Hope this can help people in the same situation when having to deploy OpenShift on premises/bare-metal.

19 May 2020 10:00pm GMT

15 May 2020

Dries Buytaert: The power of Open Source in the fight against COVID-19

As someone who has spent his entire career in Open Source, I've been closely following how Open Source is being used to fight the COVID-19 global pandemic.

I recently moderated a panel discussion on how Open Source is being used with regards to the coronavirus crisis. Our panel included: Jim Webber (Chief Scientist at Neo4J), Ali Ghodsi (CEO at Databricks), Dan Eiref (Senior Director of Product management at Markforged) and Debbie Theobold (CEO at Vecna Robotics). Below are some of the key takeaways from our discussion. They show how Open Source is a force for good in these uncertain times.

Open Source enables knowledge sharing

Providing accurate information related to COVID-19 is an essential public service. Neo4J worked with data scientists and researchers to create CovidGraph. It is an Open Source graph database that brings together information on COVID-19 from different sources.

Jim Webber from Neo4J explained: "The power of graph data [distributed via an open source management system] is that it can pull together disparate datasets from medical practitioners, public health officials and other scientific publications into one central view. People can then make connections between all the facts. This is useful when looking for future long-term solutions." CovidGraph helped institutions like the Canadian government integrate data from multiple departments and facilities.

Databricks CEO Ali Ghodsi also spoke to his company's efforts to democratize data and artificial intelligence. Their mission is to help data teams solve the world's toughest problems. Databricks created Glow, an Open Source toolkit built on Apache Spark that enables large-scale genomic analysis. Glow helps scientists understand the development and spread of the COVID-19 virus. Databricks made their datasets available for free. Using Glow's machine learning tools, scientists are creating predictive models that track the spread of COVID-19.

Amid the positive progress we're seeing from this open approach to data, some considerations were raised about governments' responsibilities with the data they collect. Maintaining public trust is always a huge concern. Still, as Ali said, "The need for data is paramount. This isn't a matter of using data to sell ads; it's a matter of using data to save lives."

Open Source makes resources accessible on a global scale

It's been amazing to watch how Open Source propels innovation in times of great need. Dan Eiref from 3D printer company Markforged spoke to how his company responded to the call to assist in the pandemic. Markforged Open Sourced the design for face masks and nasal swabs. They also partnered with doctors to create a protective face shield and distributed personal protective equipment (PPE) to more than 500 hospitals.

"Almost immediately we got demand from more than 10,000 users to replicate this design in their own communities, as well as requests to duplicate the mask on non-Markforged printers. We decided to Open Source the print files so anyone could have access to these protections," said Eiref.

The advantage of Open Source is that it can quickly produce and distribute solutions to the people who need them most. Debbie Theobold, CEO of Vecna Robotics, shared how her company helped tackle the shortage of ventilators. Since COVID-19 began, medical manufacturers have struggled to provide enough ventilators, which can cost upwards of $40,000. Vecna Robotics partnered with the Massachusetts Institute of Technology (MIT) to develop an Open Source ventilator design called Ventiv, a low-cost alternative for emergency ventilation. "The rapid response from people to come together and offer solutions demonstrates the altruistic pull of the Open Source model to make a difference," said Theobold.

Of course, there are still challenges for Open Source in the medical field. In the United States, all equipment requires FDA certification. The FDA isn't used to Open Source, and Open Source isn't used to dealing with FDA certification either. Fortunately, the FDA has adjusted its process to help make these designs available more quickly.

Open Source accelerates digital transformations

A major question on everyone's mind was how technology will affect our society post-pandemic. It's already clear that long-term trends like online commerce, video conferencing, streaming services, cloud adoption and even Open Source are all being accelerated as a result of COVID-19. Many organizations need to innovate faster in order to survive. Responding to long-term trends by slowly adjusting traditional offerings is often "too little, too late".

For example, Debbie Theobold of Vecna Robotics brought up how healthcare organizations can see greater success by embracing websites and mobile applications. "These efforts for better, patient-managed experiences that were going to happen eventually are happening right now. We've launched our mobile app and embraced things like online pre-registration. Companies that were relying on in-person interactions are now struggling to catch up. We've seen that technology-driven interactions are a necessity to keeping patient relationships," she said.

At Acquia, we've known for years that offering great digital experiences is a requirement for organizations looking to stay ahead.

In every crisis, Open Source has empowered organizations to do more with less. It's great to see this play out again. Open Source teams have rallied to help and come up with some pretty incredible solutions when times are tough.

15 May 2020 4:52pm GMT

Kristof Willen: Inmotion V10

Toys

So I got my Xiaomi M365 e-scooter a few months ago, and it quickly started to show quite some disadvantages. The most annoying was the weak motor: going up long hills quickly forced me to step off as the e-scooter came to a grinding halt. The autonomy was low, which required a daily charging session of 4 hours. Another issue was the bulky form factor, which made transportation on the train a bit cumbersome. And last but not least: an e-scooter still looks like a child's toy. I know I'm a grown-up child, but that doesn't mean I want to shout it out to everyone.

In the meantime, I had come across some information on monowheels: single-wheeled devices with pedals on the side. It looks quite daunting to ride one, but when I received my Inmotion V10, I was immediately sold. This kind of device is really revolutionary: powerful motor, great range and looks. It is compact enough to easily take on public transport, and has a maximum speed of 40 kph.

It did, however, take me quite a few days to learn to ride this thing: only after a week of daily half-hour exercise sessions did things finally 'click' inside my head, and a week later I found myself confident enough to ride in traffic. So a steep learning curve indeed, but if you persist, the reward is immense: riding this thing feels like flying!

15 May 2020 1:38pm GMT

Mattias Geniar: MySQL: ERROR 1153 (08S01): Got a packet bigger than 'max_allowed_packet' bytes

I ran into this error when doing a very large MySQL import from a dumpfile.
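
The usual fix is to raise the limit on both sides of the connection (the 1G value and the mydb database name below are examples; the server-side change requires sufficient privileges, and a permanent fix goes in my.cnf under [mysqld]):

```shell
# Raise the server-side limit for the current runtime...
mysql -e "SET GLOBAL max_allowed_packet=1073741824;"

# ...and let the client doing the import send big packets too
mysql --max_allowed_packet=1G mydb < dump.sql
```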

15 May 2020 12:00am GMT

14 May 2020

Mattias Geniar: Create a date in the future for use in Bash scripts on BSD/Mac OSX

Annoyingly, the date command differs vastly between Linux & BSD systems. Mac, being based on BSD, inherits the BSD version of that date command.
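
For example, computing tomorrow's date is the same intent but two incompatible flags:

```shell
# GNU date (Linux): relative dates via -d
date -d '+1 day' '+%Y-%m-%d'

# BSD/macOS date: relative dates via -v (this flag does not exist in GNU date)
# date -v+1d '+%Y-%m-%d'
```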

14 May 2020 12:00am GMT

11 May 2020

Dries Buytaert: We raised $500,000!

I'm excited to announce that the Drupal Association has reached its 60-day fundraising goal of $500,000. We also reached it in record time; in just over 30 days instead of the planned 60!

It has been really inspiring to see how the community rallied to help. With this behind us, we can look forward to the planned launch of Drupal 9 on June 3rd and our first virtual DrupalCon in July.

I'd like to thank all of the individuals and organizations who contributed to the #DrupalCares fundraising campaign. The Drupal community is stronger than ever! Thank you!

11 May 2020 7:00pm GMT

10 May 2020

Mattias Geniar: Dissecting the code responsible for the Bitcoin halving

In a few hours, the Bitcoin network will experience its third "halving". So what is it and how does it work under the hood?
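
The core of the halving fits in a few lines; this is a simplified shell sketch of the subsidy logic found in Bitcoin Core's GetBlockSubsidy, not the actual consensus code:

```shell
# The subsidy starts at 50 BTC (in satoshi) and is halved, via a right
# shift, once every 210000 blocks
height=630000                      # the block at which the third halving occurs
halvings=$(( height / 210000 ))    # → 3
echo $(( (50 * 100000000) >> halvings ))
# → 625000000 satoshi, i.e. 6.25 BTC
```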

10 May 2020 12:00am GMT
08 May 2020

Xavier Mertens: [SANS ISC] Using Nmap As a Lightweight Vulnerability Scanner

I published the following diary on isc.sans.edu: "Using Nmap As a Lightweight Vulnerability Scanner":

Yesterday, Bojan wrote a nice diary about the power of the Nmap scripting language (based on Lua). The well-known port scanner can be extended with plenty of scripts that are launched depending on the detected ports. When I read Bojan's diary, it reminded me of an old article that I wrote on my blog a long time ago. The idea was to use Nmap as a lightweight vulnerability scanner. Nmap has a scan type that tries to determine the service/version information running behind an open port (enabled with the '-sV' flag). Based on this information, the script looks for interesting CVEs in a flat database. Unfortunately, the script was developed by a third-party developer and was never integrated into the official list of scripts… [Read more]
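
For reference, the modern version of that approach uses one of the community CVE-lookup scripts; the example below assumes nmap is installed together with the third-party vulners script, and that you are authorized to scan the target host (the hostname is a placeholder):

```shell
# -sV grabs service/version banners; the vulners script maps them to known
# CVEs, here keeping only findings with a CVSS score of 7.0 or higher
nmap -sV --script vulners --script-args mincvss=7.0 target.example.com
```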

[The post [SANS ISC] Using Nmap As a Lightweight Vulnerability Scanner has been first published on /dev/random]

08 May 2020 10:35am GMT

06 May 2020

Xavier Mertens: [SANS ISC] Keeping an Eye on Malicious Files Life Time

I published the following diary on isc.sans.edu: "Keeping an Eye on Malicious Files Life Time":

We know that today's malware campaigns are based on fresh files. Each piece of malware has a unique hash and it makes the detection based on lists of hashes not very useful these days. But can we spot some malicious files coming on stage regularly or, suddenly, just popping up from nowhere… [Read more]

[The post [SANS ISC] Keeping an Eye on Malicious Files Life Time has been first published on /dev/random]

06 May 2020 10:23am GMT

05 May 2020

Mattias Geniar: Creating a 2-of-3 multisig with raw transactions on EOSIO

These instructions can be followed to create a 2-out-of-3 multisignature address on the EOS blockchain (or any derivative thereof).
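
Skipping the raw-transaction plumbing, the end state can be illustrated with cleos (the account name and keys below are placeholders, and keys in the permission JSON must be listed in sorted order):

```shell
# Give 'myaccount' an active permission that needs any 2 of 3 keys
# (each key has weight 1, and the threshold is 2)
cleos set account permission myaccount active \
  '{"threshold": 2,
    "keys": [
      {"key": "EOS5...key1", "weight": 1},
      {"key": "EOS6...key2", "weight": 1},
      {"key": "EOS7...key3", "weight": 1}
    ],
    "accounts": [], "waits": []}' \
  owner -p myaccount@owner
```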

05 May 2020 12:00am GMT