25 Apr 2017
According to a new study from UC Berkeley's Haas Institute for a Fair and Inclusive Society, AT&T has been focused on deploying fiber-to-the-home in the higher-income neighborhoods of California, giving wealthy people access to gigabit internet while others are stuck with DSL internet that doesn't even meet state and federal broadband standards. Ars Technica reports: California households with access to AT&T's fiber service have a median income of $94,208, according to "AT&T's Digital Divide in California," in which the Haas Institute analyzed Federal Communications Commission data from June 2016. The study was funded by the Communications Workers of America, an AT&T workers' union that's been involved in contentious negotiations with the company. By contrast, the median household income is $53,186 in California neighborhoods where AT&T provides only DSL, with download speeds typically ranging from 768kbps to 6Mbps. At the low end, that's less than 1 percent of the gigabit speeds offered by AT&T's fiber service. The median income in areas with U-verse VDSL, which ranges from 12Mbps to 75Mbps, is $67,021. In 4.1 million California households, representing 42.8 percent of AT&T's California service area, AT&T's fastest speeds fell short of the federal broadband definition of 25Mbps downloads and 3Mbps uploads, the report said.
Read more of this story at Slashdot.
25 Apr 2017 11:40pm GMT
When choosing a hotel, it's good to consider amenities, hospitality, and quality in addition to price. These brands and chains are the hotels with the highest customer satisfaction marks for 2017.
25 Apr 2017 11:30pm GMT
randomErr writes: Last year, Netflix tried to enter China but ran into regulatory issues, so it has now entered into a licensing deal with iQiyi. iQiyi was founded in 2010 by Baidu, which owns it in much the same way that Google owns YouTube. Which Netflix content will be shown, and how the subscription service will work, has yet to be announced.
25 Apr 2017 11:10pm GMT
Machine applies equal pressure across the surface area of the juice bags.
25 Apr 2017 10:51pm GMT
BarbaraHudson writes: A murdered woman's Fitbit data shows she was still alive an hour after the time her husband claims she was murdered and he was tied up, contradicting the husband's description of events. New York Daily News reports: "Richard Dabate, 40, was charged this month with felony murder, tampering with physical evidence and making false statements following his wife Connie's December 2015 death at their home in Ellington, Tolland County. Dabate called 911 reporting that his wife was the victim of a home invasion, alleging that she was shot dead by a 'tall, obese man' with a deep voice like actor Vin Diesel's, sporting 'camouflage and a mask,' according to an arrest warrant. Dabate alleged her death took place more than an hour before the last movements her Fitbit recorded."
25 Apr 2017 10:40pm GMT
The droid can scan 300 license plates a minute.
25 Apr 2017 10:38pm GMT
While cocktails aren't exactly good for you (alcohol is a toxin, after all), some drinks can be more dangerous than others. These dicey craft cocktail ingredients can be found in bars all over the place.
25 Apr 2017 10:20pm GMT
Adequate Man: Which TV Shows Have The Most People Boned To? | Jezebel: Bill O'Reilly Didn't Harass Me, But His Viewers Did | Fusion: Judge Blocks Trump's Executive Order Threatening Sanctuary Cities | The Root: What Happened to Your Revolution, Bernie Sanders?
25 Apr 2017 10:02pm GMT
With higher prices, Mylan allegedly dangled deep discounts, if buyers excluded rival.
25 Apr 2017 9:58pm GMT
It was strange to me, the idea that somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. It's like that scene at the end of the first Indiana Jones movie where they put the Ark of the Covenant back on a shelf somewhere, lost in the chaos of a vast warehouse. It's there. The books are there. People have been trying to build a library like this for ages - to do so, they've said, would be to erect one of the great humanitarian artifacts of all time - and here we've done the work to make it real and we were about to give it to the world and now, instead, it's 50 or 60 petabytes on disk, and the only people who can see it are half a dozen engineers on the project who happen to have access because they're the ones responsible for locking it up.
I asked someone who used to have that job, what would it take to make the books viewable in full to everybody? I wanted to know how hard it would have been to unlock them. What's standing between us and a digital public library of 25 million volumes? You'd get in a lot of trouble, they said, but all you'd have to do, more or less, is write a single database query. You'd flip some access control bits from off to on. It might take a few minutes for the command to propagate.
You know those moments, when reading about history, where you think "how could these people have been so stupid? Why didn't drinking from, defecating in and washing in the same body of water raise a red flag? Why did people think slavery was an a-ok thing to do? Why did they sacrifice children to make sure the sun would rise in the morning? Were these people really that stupid?" A hundred years from now, people are going to look back upon the greatest library of mankind, filled with countless priceless works that nobody has access to, fully indexed, ready to go at a push of a button - this invaluable, irreplaceable treasure trove of human culture, and think, "how could these people have been so stupid?"
25 Apr 2017 7:41pm GMT
Apple released its Environmental Responsibility Report Wednesday, an annual grandstanding effort that the company uses to position itself as a progressive, environmentally friendly company. Behind the scenes, though, the company undermines attempts to prolong the lifespan of its products. Apple's new moonshot plan is to make iPhones and computers entirely out of recycled materials by putting pressure on the recycling industry to innovate. But documents obtained by Motherboard using Freedom of Information requests show that Apple's current practices prevent recyclers from doing the most environmentally friendly thing they could do: salvage phones and computers from the scrap heap. Having "old" but perfectly usable products in the marketplace is a terrible position for a company like Apple to be in. Most computers, smartphones, and tablets from, say, the past four or five years are still perfectly fine and usable today, and a lot of people would be smart to buy one of these "old" devices instead of a new one. Except, of course, that Apple doesn't get a dime when people do that. So, it has "recycling" companies destroy them instead. Remember: profit always comes before the customer. Apple is executing an environment and sustainability PR campaign right now through its usual PR outlets - don't be fooled.
25 Apr 2017 6:47pm GMT
24 Apr 2017
So the recently recovered source code to Darwin 0.1 corresponds with the release of the PowerPC-only OS X Server 1.0. However, as we all found out, Darwin was still built and maintained on Intel, as a very secretive plan B in case something went wrong with the PowerPC platform. Being portable had saved NeXT before, and now it would save Apple. With this little bit of background, and a lot of stumbling around in the dark, I came up with some steps that have permitted me to build the Darwin 0.1 kernel under DR2. This is beyond awesome.
24 Apr 2017 10:40pm GMT
19 Oct 2016
Well, it's hanging on in there, but why didn't it conquer the world?
Analysis Does European Commissioner for Competition Margrethe Vestager's team pay close attention to the tech news? If not, perhaps they should.…
19 Oct 2016 10:24am GMT
17 Oct 2016
Linus Torvalds teaches devs a lesson with early rc1 release
Google may have killed off its modular smartphone Project Ara idea, but some of the code that would have made it happen looks likely to come to the Linux kernel.…
17 Oct 2016 6:58am GMT
Your weekly Windows entertainment large and small
This week's worldwide BSOD roundup starts with what looks to your writer like a virtualisation launch bug. Submitter Alexander tells us it came from Peterborough Station, in Cambridgeshire.…
17 Oct 2016 6:28am GMT
21 May 2016
Copy Trader is a simple and innovative way to earn money online with forex trading. It is a valid and widely proven system, followed by many small investors around the world. Unfortunately, in Italy these mirror trading systems, or programs that allow you to […]
21 May 2016 4:05pm GMT
28 Jun 2015
Just a short hint for all fans of chess programs. PicoChess 0.43 has been released.
28 Jun 2015 11:02pm GMT
20 May 2012
On Sunday, May 20th 2012, people in a narrow strip from Japan to the western United States will be able to see an annular solar eclipse, the first in 18 years. The moon will cover as much as 94% of the sun. An Annular Solar Eclipse is different from a Total Solar Eclipse, when the […]
20 May 2012 9:51pm GMT
09 Nov 2011
In the last year the number of World of Warcraft subscribers has fallen from 12 million to 10.3 million...
09 Nov 2011 11:55am GMT
Via YouTube user DarkSydeGeoff, we came across a Battlefield 3 exploit that allows friends to boost enormous amounts of experience in hardcore matches...
09 Nov 2011 1:43am GMT
06 Nov 2011
Tyrs is a microblogging client supporting Twitter and Status.net (identi.ca); it is a console client built with Python's ncurses module. The release of version 0.5.0 is a good excuse to introduce Tyrs. Tyrs aims for good interaction through a fairly intuitive interface, within what ncurses can provide. Tyrs also tries not to [...]
06 Nov 2011 9:43pm GMT
05 Nov 2011
After one year of managing a network of 10 servers with Cfengine, I'm currently building two clusters of 50 servers with Puppet (which I'm using for the first time), and have various notes to share. From my experience I had a feeling Cfengine just wasn't right for this project, and didn't consider it seriously. These servers are all running Debian GNU/Linux, and Puppet felt natural because of the good Debian integration and the number of users, who have also produced a lot of resources. Chef was soon out of the picture because of the scary architecture: CouchDB, Solr and RabbitMQ... coming from Cfengine, this seemed like a bad joke. You probably need to hire a Ruby developer when it breaks. Puppet is somewhat better in this regard.
Puppet master needs Ruby, and has a built-in file server using WEBrick. My first disappointment with Puppet was WEBrick. Though PuppetLabs claims you can scale it to 20 servers, that proved way off: the built-in server has problems serving as few as 5 agents, and you see many dropped connections and failed catalog transfers. I was forced to switch to Mongrel with Nginx as a frontend very early in the project, on both clusters. This setup works much better (even though Apache+Passenger is now the method PuppetLabs recommends), and it's not a huge complication compared to WEBrick (or to Cfengine, which doesn't make you jump through any hoops). Part of the reason for this failure is my pull interval, which is 5 minutes with a random sleep of up to 3 minutes to avoid harmonics (collisions still occur frequently at these intervals, and WEBrick fails miserably). In production a customer cannot wait 30 or 45 minutes between pulls to get his IP address whitelisted for a service, or some other mundane task; it must happen within 10 minutes... but I'll come to these kinds of unrealistic ideas a little later.
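The pull interval described above maps onto two standard agent settings. A minimal sketch of the relevant puppet.conf section (the 5-minute interval and 3-minute random sleep come from the text; everything else here is illustrative, not the actual configuration from this deployment):

```ini
# /etc/puppet/puppet.conf on an agent -- illustrative, not the
# deployment's real file
[agent]
    # pull a catalog from the master every 5 minutes (seconds)
    runinterval = 300
    # add a random sleep before each run, up to 3 minutes, so that
    # 50 agents don't all hit the master at the same instant
    splay = true
    splaylimit = 180
```

Even with splay enabled, agents that start at the same time (e.g. after a cluster-wide reboot) can still cluster their check-ins, which matches the "harmonics" problem the author describes.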
Unlike the Cfengine article, I have no bootstrapping notes and no code/modules to share. By default, a freshly started puppet agent will look for a host called "puppet" and pull in whatever you defined in your manifests to bootstrap servers. As for modules, I wrote a ton of code, and though I'd like to share it, my employer owns it. But unlike with Cfengine v3, there are a lot of resources out there for Puppet which can teach you everything you need to know, so I don't feel obligated to even ask.
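On the master side, that bootstrap behaviour is typically driven by a default node in site.pp. A hedged sketch (the class names below are hypothetical stand-ins, not the modules from this project):

```puppet
# site.pp on the master -- a freshly started agent resolves the
# hostname "puppet", requests its catalog, and matches this node
node default {
  # hypothetical bootstrap classes every new server would receive
  include base::users
  include base::ssh
  include base::ntp
}
```

Anything more specific than the default node (per-cluster or per-customer classes) is then layered on top with more specific node definitions.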
Interestingly enough, published modules would not help you get your job done. You will have to write your own, and your team members will have to learn how to use your modules, which also means writing a lot of documentation. Maybe my biggest disappointment was getting disillusioned with most Puppet advocates and DevOps prophets. The articles and modules most of them write, and the experiences they share, have nothing to do with the real world. It's as if they host servers in a magical land where everything is done in one way and all servers are identical. Hosting big websites and their apps is a much, much different affair.
Every customer does things differently, and I had to write custom modules for each of them. Just between these two clusters, a module managing Apache is different. You can abstract your code a lot, but you reach a point where you simply can't push it any further; or if you can, you create a mess that is unusable by your team members, and I'm trying to make their jobs better, not make them miserable. One customer uses an Isilon NAS, the other has a content distribution network; one uses Nginx as a frontend, the other has chrooted web servers; one writes logs to NFS, the other to a syslog cluster... Now imagine this at a scale of 2,000 customers and 3 times the servers, and most of the published infrastructure design guidelines become laughable. Instead you find yourself implementing custom solutions and inventing your own rules, as best you can...
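The abstraction trade-off can be illustrated with a parameterized class (a hypothetical sketch, not the author's actual module): each per-customer difference leaks into the interface as another parameter, and past a point the module stops being something teammates can comfortably use:

```puppet
# A hypothetical apache module: the customer differences listed
# above (chroot, frontend, log target) each become a parameter
class apache (
  $chrooted   = false,     # one customer chroots its web servers
  $frontend   = 'none',    # e.g. 'nginx' for another customer
  $log_target = 'local',   # 'nfs' or 'syslog' elsewhere
) {
  package { 'apache2': ensure => installed }

  service { 'apache2':
    ensure  => running,
    require => Package['apache2'],
  }

  # ...each new customer tends to add another parameter and
  # another conditional branch here, until the "abstraction"
  # is harder to read than separate per-customer modules
}
```

This is one concrete reason the generic published modules rarely fit: they abstract over a different set of differences than the ones your customers actually have.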
I'm ultimately here to tell you that the projects are in a better state than they would be with the usual cluster-management policy. My best moment was an e-mail from a team member saying "I read the code, I now understand it [Puppet]. This is fucking awesome!" I knew at that moment I had managed to build something good (or good enough), despite the shortcomings I found, and with nothing more than PuppetLabs resources. Actually, that is not completely honest: I did buy and read the book Pro Puppet, which contains an excellent chapter on using Git for collaboration on modules between sysadmins and developers, with a proper implementation of development, testing and production (Puppet) environments.
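The environment split that Pro Puppet describes looks roughly like this in puppet.conf terms (config-file environments, as was current for Puppet at the time; the paths are illustrative, and each environment would track a Git branch of the same module repository):

```ini
# /etc/puppet/puppet.conf on the master -- illustrative paths;
# each environment's modulepath is a checkout of a Git branch
[development]
    modulepath = /etc/puppet/environments/development/modules
[testing]
    modulepath = /etc/puppet/environments/testing/modules
[production]
    modulepath = /etc/puppet/environments/production/modules
```

An agent then selects which tree it gets with `environment = testing` (or via `--environment` on the command line), so module changes can be promoted branch by branch before reaching production.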
05 Nov 2011 11:17pm GMT
Creating json is now ten times easier.
05 Nov 2011 3:10am GMT
13 May 2011
Some words about the history of Planet Sun. For roughly six years Planet Sun was an aggregation of public weblogs written by employees of Sun Microsystems, though it never was a product or publication of Sun Microsystems itself. The website was powered by Planet and run by David Edmondson. On 01 Mar 2010 David […]
13 May 2011 12:36am GMT