28 May 2015
The HITB crew is back in the beautiful city of Amsterdam for a new edition of their security conference. Here is my wrap-up for the first day!
The opening keynote was assigned to Marcia Hofmann, who works for the EFF (the Electronic Frontier Foundation). Her keynote title was: "Fighting for Internet Security in the New Crypto Wars". The EFF always fights for more privacy, and she reviewed the history of encryption and all the bad stories around it. It started with a claim: "We need strong encryption but we need some backdoors". Ever since encryption algorithms were developed, developers have received pressure from governments to implement backdoors or use weak(er) keys to allow interception… just in case! This has happened before and will happen again.
As a prologue, Marcia explained how everything started with the development of RSA and Diffie-Hellman. For some other algorithms like DES, it was clear that the development team was very close to the NSA, which deliberately pushed for weaker keys to make brute-forcing them feasible. In the 80's and 90's, cryptography was developed more and more by the private sector and academic researchers. Personal computers rose and people started to encrypt their data. Then came the famous PGP, key escrow and the Clipper chip. Marcia also explained CALEA (the "Communications Assistance for Law Enforcement Act"): technology must be designed in such a way that the FBI can intercept communications if needed (of course, with a proper warrant). Then came the restrictions on exporting encryption technology outside the US. It was a good opportunity for Marcia to say a few words about the Wassenaar Arrangement and the recent story about the project to prevent the export of intrusion software and surveillance technology. Today, in the Snowden era, governments are seen as attackers. The NSA was able to tamper with all our data but also infiltrated major Internet players. What about the future? What should happen and what should we do? Marcia took a pendulum as an example. Several external forces affect the way a pendulum moves. In the same way, there are different external pressures which affect how security is designed:
After the morning coffee break, I went to the second track to follow Pedram Hayati's presentation: "Uncovering Secret Connections Using Network Theory and Custom Honeypots". The first part gave background information about our classic defence model and honeypots. In a traditional security model, the perimeter is very hardened, but once the intruder is inside, nothing can stop him. We keep the focus on making the perimeter stronger. It comes from the physical security approach (the old castles), and attackers put all their effort into bypassing a single high barrier. Our second problem? We enter a battle without knowing the attackers. Hence the idea of active defence and protection. Active defence is defined as:
A security approach that actively increases the cost of performing an attack in terms of time, effort and required resources to the point where a successful compromise against a target is impossible.
To achieve this, we need:
- Profile the attacker
- Disrupt its tasks
The foundation is knowing the attacker! So what tools do we have? Our logs of course, but honeypots can be very helpful! In the second part, Pedram explained what honeypots are… "a decoy system to lure attackers". They increase the cost of a successful attack: the attacker will waste time in the honeypot. It is fundamental that a honeypot looks legitimate, but honeypot software has its own signatures and behaviour; that's why it must be fully configurable to lure the attacker. Some principles:
- Do not fake network services or re-implement a network protocol.
- Segregation of duties (interaction, monitoring, storage)
- Smart Deployment (use an unused public IP, internal network, previously used IP)
The next section was the experiment. Pedram deployed 13 honeypots at major cloud providers (AWS, Google), distributed across the Internet. They mimicked a typical server and had unpublished IP addresses (no domain mapping). The goal was to identify SSH attacks, discover attack profiles per region and relations between them. How long did it take to detect the first infection? On average, less than 10 minutes! An analysis of the collected data was performed and it was possible to classify the attackers into three categories:
Pedram also explained how he generated nice statistics about the attackers, their behaviour and locations. To conclude, he compared an attacker to somebody throwing bricks through windows. How do we react? We can take actions to prevent this guy from throwing more bricks, or we can buy bullet-proof windows. It's the same with information security. Try to get rid of the attackers!
My next choice was a talk about mobile phone operators: "Bootkit via SMS: 4G Access Level Security Assessment", presented by Timur Yunusov and Kirill Nesterov. Today, 3G/4G networks are not only used by people to surf on Facebook but also, more and more, for M2M ("machine to machine") communications. They explained that many operators have GGSNs ("GPRS Gateway Support Nodes") facing the Internet (just use Shodan to find some). A successful attack against such devices can lead to DoS, information leaks, fraud or APN guessing.
By the way, did you know that, when you are out of credit, telcos block TCP traffic but UDP remains available? It's time to use your UDP VPN! But attacking the network in this way is not new, so the speakers focused on another path: attacking the network via SMS! On the hardware side, they investigated some USB modems used by many computers. Such devices are based on Linux/Android/Busybox and have many interesting features. Most of them suffer from basic vulnerabilities like XSS, CSRF or the ability to brick the device. They showed a demo video of an XSS attack and a CSRF to steal the user password. If you can own the device, the next challenge is to own the computer using the USB modem! To achieve this, they successfully turned the modem into an HID device. It is first detected as a classic RNDIS device, then as a keyboard, and it operates like a Teensy to inject keystrokes. You own the modem and the computer; what about the SIM card? They explained in detail how they achieved this step and ended with a demonstration where they remotely cloned a SIM card and captured GSM traffic! The best advice they gave as a conclusion: always change your PIN code!
After the lunch, Didier Stevens and myself gave our workshop about IOS forensics. I missed two talks, but my next choice was to listen to Bas Venis, a very young security researcher, who talked about browsers: "Exploiting Browsers the Logical Way". The presentation was based on "logic" bugs: no need for debuggers and other complicated tools to find such vulnerabilities. Bas explained the Chrome URL spoofing vulnerability that he discovered (CVE-2013-6636).
Then he switched to the Flash player. The goal was to evade the sandbox. After explaining the different types of sandboxes (remote, local_with_file, local_with_network, local_trusted and application), he explained that the logic of URL/URI handling is not rock solid in sandboxes, which led to CVE-2014-0535. The conclusion was that looking for logic bugs, and using them, proved to be a sensible approach when trying to hack browsers. Sweet results can be found, and they require no tools, just dedication and creativity. Just a remark about the quality of the video: almost unreadable on the big plasma screens installed in the room.
Finally, the first day ended with a rock star: Saumil Shah, who presented "Stegosploit: Hacking with Pictures". This presentation is the next step in Saumil's research about owning the user with pictures. In 2014, at hack.lu, he already presented "Hacking with Pictures". What's new? Saumil insisted on the fact that "a good exploit is one delivered with style". Pwning the browser can be complicated, so why not just find a simpler way to deliver the exploit? The first part was a review of the history of steganography, a technique used to hide a message in a picture without visibly altering it. Then came the principle of GIFAR: a GIF file with a JAR file appended to it. Then webshells arose, embedding tags like "<?php>" or "<% … %>". Finally, EXIF data were used (for example, to deliver an XSS).
Stegosploit is not a new 0-day exploit with a nice name and logo. It's a technique to deliver browser exploits via pictures. To achieve this, we need an attack payload and a "safe" decoder which can transform pixels into dangerous data. How?
- From the network's point of view (e.g. for an IDS), only images must be seen
- The exploit is hidden in the pixels, with no visible change to the picture itself
- The image auto-runs upon load (the decoder must be bundled with the image)
- The exploit must be automatically decoded and triggered
Be conservative in what you send and liberal in what you receive. (Jon Postel)
This closed the first day! Note that slides are uploaded after each talk and available here.
28 May 2015 9:08pm GMT
They had already made commodity-class hardware in the cloud a standard, and their R&D team seems to be heading the same way for Virtual Reality, too.
From a cardboard Virtual Reality goggle to affordable VR recording. Impressive stuff.
GoPro's Jump-ready 360 camera array uses HERO4 camera modules and allows all 16 cameras to act as one. It makes camera syncing easy, and includes features like shared settings and frame-level synchronization.
If this works as simply as advertised, we'll be seeing a lot of VR content in the near future.
Is Virtual Reality on the web really the next step?
- Virtual Reality As The Next Step After Responsive Webdesign? Mozilla has an impressive demo site running at MozVR.com. Over the...
28 May 2015 7:11pm GMT
- The basics of functional programming (allegretto)
- Traits (acciaccatura)
- The collections API (adagio)
- Implicits (accelerando non troppo)
- Asynchrony (allegro fortissimo)
On stage, the tenor will be Alexis Vandendaele, a Java developer at Sfeir who is passionate about rising new technologies, web-related or not. The goal of his talk is to give you an overview of Scala's capabilities in order to demystify the language.
28 May 2015 7:26am GMT
27 May 2015
Just read an article on BBC News that starts off with the AdBlock Plus team winning another case in a German court (yeay) and ends with a report on how Firefox also has built-in tracking protection which, for now, is off by default and somewhat hidden. To enable it, just open about:config and set privacy.trackingprotection.enabled to true. I disabled Ghostery for now; let's see how things go from here.
27 May 2015 4:46pm GMT
The webempresa.com team contacted me a couple of days ago to let me know they created a small tutorial on the installation & basic configuration of Autoptimize, including this video of the process;
The slowdown noticed when activating JS optimization is due to the relative cost of aggregating & minifying the JS. To avoid this overhead for each request, implementing a page caching solution (e.g. HyperCache plugin or a Varnish-based solution) is warmly recommended.
27 May 2015 3:34pm GMT
26 May 2015
I may be an Apple fan, but I LOL'd at this major feature that leaked.
Alongside this, the company plans to tweak the keyboard to work better in both landscape and portrait keyboard mode and will make it easier to tell when the shift key is selected.
If new OS details leak and they mention fixing the shift key, you're doing something wrong.
26 May 2015 6:51pm GMT
(an update of an older post, now complete up to vSphere 6)
Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address: 0xE8480 - Size: 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address: 0xE7C70 - Size: 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address: 0xE7910 - Size: 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address: 0xEA6C0 - Size: 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address: 0xEA550 - Size: 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address: 0xEA2E0 - Size: 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address: 0xE72C0 - Size: 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on.
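If you have many VMs to check, the lookup can be scripted. Here is a small Go sketch (my own helper, not an official tool) that scans dmidecode output for the BIOS release date and maps it to the table above:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Table from the post: BIOS Release Date -> vSphere version.
var biosDateToESXi = map[string]string{
	"04/21/2004": "ESX 2.5",
	"04/17/2006": "ESX 3.0",
	"01/30/2008": "ESX 3.5",
	"08/15/2008": "ESX 4",
	"09/22/2009": "ESX 4U1",
	"10/13/2009": "ESX 4.1",
	"01/07/2011": "ESXi 5",
	"06/22/2012": "ESXi 5.1",
	"07/30/2013": "ESXi 5.5",
}

// esxiVersionFromDmidecode finds the "Release Date:" line in
// dmidecode output and looks it up in the table.
func esxiVersionFromDmidecode(output string) string {
	for _, line := range strings.Split(output, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "Release Date:") {
			date := strings.TrimSpace(strings.TrimPrefix(line, "Release Date:"))
			if v, ok := biosDateToESXi[date]; ok {
				return v
			}
			return "unknown (" + date + ")"
		}
	}
	return "no Release Date found"
}

func main() {
	out, err := exec.Command("dmidecode").Output() // usually requires root
	if err != nil {
		fmt.Println("dmidecode failed:", err)
		return
	}
	fmt.Println(esxiVersionFromDmidecode(string(out)))
}
```

Remember the caveat above: the reported date reflects the host the VM booted on, not necessarily the host it runs on now.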
26 May 2015 12:02pm GMT
As usual I heard this on KCRW earlier today; Daniel Lanois leaving his roots-oriented songwriting for some pretty spaced-out instrumentals with a jazz-like and sometimes straight drum & bass feel to them. This is "Opera", live;
You can watch, listen & enjoy more live "flesh & machine"-material in this live set on KCRW.
26 May 2015 5:27am GMT
25 May 2015
The post Custom Fonts On Kindle Paperwhite First Generation appeared first on ma.ttias.be.
I have something to admit: I'm a bit obsessive when it comes to fonts.
Nobody notices it, but this blog has changed more fonts in the last 6 months than I care to remember. Chaparral-Pro, Open Sans, HelveticaNeue-Light, ... they've all been used.
For me, a proper font makes or breaks a reading experience. In fact, I can still remember when just a few hours before launching our corporate website, I mentioned this "cool font I just came across" to Chris, and we decided to switch to HelveticaNeue-Light for the site, last minute.
Even our animated video got a font-change because of my obsessiveness. All for the better, of course.
So as much as the internet world cares about fonts and typography, kerning and whitespace, it sort of surprises me that the e-book world doesn't. Or maybe it does, but only for the newer generation of e-readers.
I'm at my second Kindle now and I'd buy one again in a heartbeat. It's absolutely brilliant. But after a few years, you begin to notice the outdatedness of the device. It looks and feels old. To me, that's in large part because of the default font Caecilia.
Here's what it looks like.
It looks bold and makes the device feel older than it really is.
On the Kindle, I've used that font for many years. Largely because I had no idea that I could change the font in the first place. But it's readable and there really isn't much bad about it. It gets the job done.
Here's what it looks like on the Kindle itself, as the cover page of Becoming Steve Jobs.
A very quick way of making the Kindle feel new again is by changing to the other built-in fonts, more specifically Palatino.
It's more in line with modern typography as seen on the web: a lighter font that vaguely resembles HelveticaNeue.
But the default font options are limited: there are 6 included in my Paperwhite. And because geeks will be geeks, I wanted a font I chose myself.
First, make sure you're on the 5.3.1 version. I've read some blogposts about alternative methods working on 5.3.1+ versions, but none of them seemed to work for me. Download the 5.3.1 binary image here.
Next disable WiFi on the PaperWhite, because the auto-upgrades will break this functionality. You can enable it again later on, after you've added the fonts.
To downgrade the Kindle (in case you need to) follow these steps;
- Download earlier update file from Amazon: Kindle Paperwhite 1 Update 5.3.1
- Disable wifi on your Paperwhite (airplane mode).
- Connect your Kindle Paperwhite to your computer (do not disconnect until the last step).
- Copy the bin file you downloaded in step 1 to root folder of Paperwhite.
- Wait at least 2 minutes after the copy has completed (the device needs to register the .bin internally).
- Push and hold the power button until your Paperwhite restarts (the LED blinks orange and the screen light turns on).
- Wait until the Paperwhite has installed the upgrade (which is really a downgrade).
- Now you can DISCONNECT from your computer.
If you've done the steps right, the next time the Kindle boots it'll flash itself from the supplied .bin file.
Once you're on 5.3.1, getting the fonts activated is pretty easy.
Connect the device to your PC and follow these steps;
- In the root of the Kindle, make a file called "USE_ALT_FONTS" with no content.
- Make a folder called fonts and drop your favourite font in there, in 4 versions each: Regular, Italic, Bold, BoldItalic. The filename needs to include those versions in the suffix, see the example below.
I downloaded the ChaparralPro font from Fontzone.
- After you've uploaded the fonts, reboot your Kindle
- After it has booted, go to the search box on the dashboard/home screen and type ;fc-cache as a command.
That forces the Kindle to rebuild its font database. After 4-5 minutes, the device will flash white and reload its UI; that is the sign that the reload has finished. Take your time for this.
The command will look like it completed instantly, but is still running in the background.
Once it reboots, you'll find a lot more fonts available in the Font Selection window. Enabling the USE_ALT_FONTS flag also unlocks other, already installed, fonts on the device.
After the Kindle booted, I chose the new Chaparral Pro font, increased the font size by two steps above the default, and we're good to go.
I'm really happy with the results: the Chaparral Pro font is very pleasing to read.
Here's a side-by-side comparison of the original, the on-board Palatino and the newly installed Chaparral Pro. Click on the image for a bigger view.
The photo quality is sloppy, as I took "screenshots" with my phone. That means the angle is off every time and the alignment just downright sucks. But it gets the message across.
I'm hoping the next e-reader I buy has simpler options for managing custom fonts and takes its typography more seriously.
25 May 2015 9:03pm GMT
24 May 2015
Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, for which there are various supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for the various supported versions of nbd-server in Debian. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:
- For squeeze-lts (oldoldstable), you check out the secure-testing repository, run a script from that repository that generates a DLA number and email template, commit the result, and send a signed mail (whatever format) to the relevant mailinglist. Uploads go to ftp-master with squeeze-lts as the target distribution.
- For backports, you send a mail to the team alias requesting a BSA number, do the upload, and write the mail (based on a template that you need to modify yourself), which you then send (inline signed) to the relevant mailinglist. Uploads go to ftp-master with $dist-backports as the target distribution, but you need to be in a particular ACL to be allowed to do so. However, due to backports policy, packages should never be in backports before they are in the distribution from which they are derived -- so I refrained from uploading to backports until the regular security update had been done. Not sure whether that's strictly required, but I didn't think it would do harm; even so, that did mean the procedure for backports was even more involved.
- For the distributions supported by the security team (stable and oldstable, currently), you prepare the upload yourself, ask permission from the security team (by sending a debdiff), do the upload, and then ask the security team to send out the email. Uploads go to security-master, which implies that you may have to use the -sa parameter in order to make sure that the orig.tar.gz is actually in the security archive.
- For unstable and upstream, you Just Upload(TM), because it's no different from a regular release.
While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.
As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.
24 May 2015 7:18pm GMT
23 May 2015
Imagine you have an old and a new computer. You want to get rid of that old computer, but it still contains loads of files. Some of them are already on the new one, some aren't. You want to get the ones that aren't: those are the ones you want to copy before tossing the old machine out.
That was the problem I was faced with. Not willing to do the tedious task of comparing and merging files manually, I decided to write a small tool for it. Since it might be useful to others, I've made it open-source.
Here's how it works:
- Use dupefinder to generate a catalog of all files on your new machine.
- Transfer this catalog to the old machine
- Use dupefinder to detect and delete any known duplicate
- Anything that remains on the old machine is unique and needs to be transferred to the new machine
You can get it in two ways: there are pre-built binaries on Github, or you may use go get github.com/rubenv/dupefinder/...
Usage should be pretty self-explanatory:
Usage: dupefinder -generate filename folder...
    Generates a catalog file at filename based on one or more folders

Usage: dupefinder -detect [-dryrun / -rm] filename folder...
    Detects duplicates using a catalog file on one or more folders

  -detect=false: Detect duplicate files using a catalog
  -dryrun=false: Print what would be deleted
  -generate=false: Generate a catalog file
  -rm=false: Delete detected duplicates (at your own risk!)
There's no doubt that you could use any language to solve this problem, but Go really shines here. The combination of lightweight-threads (goroutines) and message-passing (channels) make it possible to have clean and simple code that is extremely fast.
Internally, dupefinder looks like this:
Each of these boxes is a goroutine. There is one hashing routine per CPU core. The arrows indicate channels.
The beauty of this design is that it's simple and efficient: the file crawler ensures that there is always work to do for the hashers, the hashers just do one small task (read a file and hash it) and there's one small task that takes care of processing the results.
A multi-threaded design, with no locking misery (the channels take care of that), in what is basically one small source file.
Any language can be used to get this design, but Go makes it so simple to quickly write this in a correct and (dare I say it?) beautiful way.
And let's not forget the simple fact that this trivially compiles to a native binary on pretty much any operating system that exists. Highly performant cross-platform code with no headaches, in no time.
The distinct lack of bells and whistles makes Go a bit of an odd duck among modern programming languages. But that's a good thing. It takes some time to wrap your head around the language, but it's a truly refreshing experience once you do. If you haven't done so, I highly recommend playing around with Go.
- How does it compare files?: It uses SHA256 hashes for each file.
- I deleted all my data and will sue!: Use this tool 100% at your own risk!
- Help!: Questions and problems on Github please.
23 May 2015 11:44am GMT
22 May 2015
It's a fact: in industry or on building sites, professionals make mistakes or, worse, get injured. Why? Because their attention drops at some point. When you're doing the same job all day long, you get tired and lose concentration. The same applies to information security! For years, more and more solutions have been deployed in companies to protect their data and users. Just make your wishlist amongst firewalls, (reverse-)proxies, next-generation firewalls, ID(P)S, anti-virus, anti-malware, end-point protection, etc. (the list is very long). Often multiple lines of defence are implemented with different firewalls, segmented networks, NAC. The combination of all those security controls tends to reduce successful attacks to a minimum. "To tend" does not mean that all of them will be blocked! A good example is phishing email, which remains a very effective way to abuse people. Even if most messages are successfully detected, a single one can have disastrous impacts. Once dropped in a user's mailbox, chances are that the potential victim will be asleep… Indeed, the company spent a lot of money to protect its infrastructure, so the user will think: "My company is doing a good job at protecting me, so if I receive a message in my mailbox, I can trust it!" Here is a real-life example I'm working on.
A big organization received a very nicely formatted email from a business partner. The mail had an attachment pretending to be a pending invoice and was sent to <email@example.com>. The person reading the information mailbox forwarded it, logically, to the accounting department. There, an accountant read the mail (coming from a trusted partner and forwarded by a colleague - what could go wrong?) and opened the attachment. No need to tell the rest of the story, you can imagine what happened. The malicious file was part of a new CTB-Locker campaign: it had been generated only a few hours before the attack and, no luck, the installed solutions were not (yet) able to detect it. The malicious file successfully passed the following controls:
- Antivirus/Antispam on the incoming MTA in the DMZ
- A Next-Generation firewall (between the DMZ - LAN)
- Some extra checks on the internal Exchange server
- An end-point protection system
Users, don't fall asleep! Keep your eyes open and keep in mind that the controls deployed by your company only reduce the risk of attacks. Your car has ABS, ESP, lane-departure detection systems and much more, but you still need to pay attention to the road! The same applies in IT. Stay safe…
22 May 2015 11:53am GMT
Earlier this week Matt Mullenweg, founder and CEO of Automattic, parent company of WordPress.com, announced the acquisition of WooCommerce. This is a very interesting move that I think cements the SMB/enterprise positioning between WordPress and Drupal.
As Matt points out a huge percentage of the digital experiences on the web are now powered by open source solutions: WordPress, Joomla and Drupal. Yet one question the acquisition may evoke is: "How will open source platforms drive ecommerce innovation in the future?".
Larger retailers with complex requirements usually rely on bespoke commerce engines or build their online stores on solutions such as Demandware, Hybris and Magento. Small businesses access essential functions such as secure transaction processing, product information management, shipping and tax calculations, and PCI compliance from third-party solutions such as Shopify, Amazon's merchant services and, increasingly, solutions from Squarespace and Wix.
I believe the WooCommerce acquisition by Automattic puts WordPress in a better position to compete against the slickly marketed offerings from Squarespace and Wix, and defend WordPress's popular position among small businesses. WooCommerce brings to WordPress a commerce toolkit with essential functions such as payments processing, inventory management, cart checkout and tax calculations.
Drupal has a rich library of commerce solutions ranging from Drupal Commerce -- a library of modules offered by Commerce Guys -- to connectors offered by Acquia for Demandware and other ecommerce engines. Brands such as LUSH Cosmetics handle all of their ecommerce operations with Drupal, others, such as Puma, use a Drupal-Demandware integration to combine the best elements of content and commerce to deliver stunning shopping experiences that break down the old division between brand marketing experiences and the shopping process. Companies such as Tesla Motors have created their own custom commerce engine and rely on Drupal to deliver the front-end customer experience across multiple digital channels from traditional websites to mobile devices, in-store kiosks and more.
To me, this further accentuates the division of the CMS market with WordPress dominating the small business segment and Drupal further solidifying its position with larger organizations with more complex requirements. I'm looking forward to seeing what the next few years will bring for the open source commerce world, and I'd love to hear your opinion in the comments.
22 May 2015 3:38am GMT
21 May 2015
This is a simple but effective tool: rtop.
rtop is a remote system monitor. It connects over SSH to a remote system and displays vital system metrics (CPU, disk, memory, network). No special software is needed on the remote system, other than an SSH server and working credentials.
You could question why you wouldn't just SSH into the box and run top, but hey, let's just appreciate rtop for what it is: a simple overview of the system's state and performance.
Not that hard, you just need the Go language runtime.
$ git clone --recursive http://github.com/rapidloop/rtop
$ cd rtop
$ make
For a few days, there was a problem with connecting over keys that use passphrases, but that was resolved in issue #16.
As easy as the installer.
rtop user@host:2222 1
This translates to;
- user: the SSH user to connect with
- host: the hostname / IP of the server to monitor
- 2222: optional, the SSH port
- 1: optional, the query interval in seconds. Defaults to 5, which is a bit slow for me
And then you have your output.
./rtop user@host:2222 1

host.domain.tld up 57d 22h 32m 7s

Load:       0.19 0.05 0.01
Processes:  1 running of 240 total
Memory:
    free    = 573.58 MiB
    used    = 1.89 GiB
    buffers = 144.43 MiB
    cached  = 1.05 GiB
    swap    = 4.00 GiB free of 4.00 GiB
Filesystems:
    /: 21.25 GiB free of 23.23 GiB
Network Interfaces:
    eth0 - 192.168.10.5/26, fe80::aa20:66ff:fe0d/64
        rx = 523.23 GiB, tx = 4972.94 GiB
    lo - 127.0.0.1/8, ::1/128
        rx = 2.69 GiB, tx = 2.69 GiB
Pretty neat summary of the system.
- Taking Netflix's Vector (Performance Monitoring Tool) For A Spin Yet another fine piece of open source software coming from...
- Advanced Monitoring Of HAProxy With Zabbix Agent If you use HAProxy as a load balancing tool, and...
- SSH logins or rsync's without using a password prompt Most commonly SSH is used with a default username/password prompt...
21 May 2015 9:30pm GMT
Again an interesting ALA article about web performance (or the lack thereof), triggered by Facebook's "Instant Articles" announcement;
I think we do have to be better at weighing the cost of what we design, and be honest with ourselves, our clients, and our users about what's driving those decisions. This might be the toughest part to figure out, because it requires us to question our design decisions at every point. Is this something our users will actually appreciate? Is it appropriate? Or is it there to wow someone (ourselves, our client, our peers, awards juries) and show them how talented and brilliant we are?
This exercise clearly starts at the design-phase, because thinking about performance in development or testing-phase is simply too late.
21 May 2015 5:17am GMT
20 May 2015
Nothing new, but I recently got reminded of this bitrot thing.
Let's talk about "bitrot," the silent corruption of data on disk or tape. One at a time, year by year, a random bit here or there gets flipped. If you have a malfunctioning drive or controller, or a loose/faulty cable, a lot of bits might get flipped. Bitrot is a real thing, and it affects you more than you probably realize.
The JPEG that ended in blocky weirdness halfway down? Bitrot. The MP3 that startled you with a violent CHIRP!, and you wondered if it had always done that? No, it probably hadn't: blame bitrot. The video with a bright green block in one corner followed by several seconds of weird rainbowy blocky stuff before it cleared up again? Bitrot.
If you're an Accidental Tech Podcast listener, you'll have heard the rants of John on HFS+ and Bitrot by now. Here's some reading material to keep you focussed;
- HFS+ Bit Rot
- Is bit rot on hard drives a real problem? (stackoverflow)
- Bitrot and atomic COWs: Inside "next-gen" filesystems
For the next few weeks, every unexplained filesystem corruption error I encounter will be blamed on bitrot.
- NsLookup Is More Powerful Than You Think Linux has a powerful DNS tool available through dig, which...
- System Calls In Apache (Linux) vs IIS (Windows) The following 2 pictures show a clear difference in system...
- Plesk Password retrieval via the command line (Linux / Windows) Plesk Controlpanel has several tools to retrieve the 'admin' password...
20 May 2015 7:08pm GMT