
They Might Be Giants - Lake Monsters (Official Video)

From: ParticleMen
Duration: 03:21

lake monsters of the USA from Cape Cod to Massachusetts
all across America from Chicago to East California

you might be asking yourself “is this a different world?
are they really there?” thank your lucky stars!

lake monsters of the USA just looking for a polling station
if it takes the cloak of darkness their voices will be counted

you might be asking yourself “what happened to this world?”
but they’re not ashamed
they can not be tamed

no hypnosis like a mass hypnosis because a mass hypnosis isn’t happening
no hypnosis like a mass hypnosis because a mass hypnosis isn’t happening
oh the silence
oh the silence

Dial-A-Song Video Week 33

Directed by Hine Mizushima

I Like Fun is here.
Get I Like Fun direct from TMBG - http://bit.ly/ILikeFun
or at iTunes - http://geni.us/iILF
or at Amazon - http://geni.us/aILF
or in the UK/EU - http://ljx.cc/TMBG_ILikeFun

jepler
2 hours ago
Beautiful
Earth, Sol system, Western spiral arm

VW Group, BMW and Daimler Are Under Investigation For Collusion In Europe

The European Commission has launched an antitrust investigation into the Volkswagen Group, BMW and Daimler, over allegations they colluded to keep certain emissions control devices from reaching the market in Europe, according to a statement the Commission released on Tuesday. CNET reports: The technologies the group allegedly sought to bury include a selective catalytic reduction system for diesel vehicles, which would help to reduce environmentally problematic oxides of nitrogen in passenger cars, and "Otto" particulate filters that trap particulate matter from gasoline combustion engines.

"The Commission is investigating whether BMW, Daimler and VW agreed not to compete against each other on the development and roll-out of important systems to reduce harmful emissions from petrol and diesel passenger cars," said Commissioner Margrethe Vestager, head of competition policy for the European Commission, in a statement. "These technologies aim at making passenger cars less damaging to the environment. If proven, this collusion may have denied consumers the opportunity to buy less polluting cars, despite the technology being available to the manufacturers."

jepler
2 hours ago
offs
Earth, Sol system, Western spiral arm

IBM is Being Sued For Age Discrimination After Firing Thousands

A lawyer known for battling tech giants over the treatment of workers has set her sights on International Business Machines Corp. Bloomberg reports: Shannon Liss-Riordan on Monday filed a class-action lawsuit in federal court in Manhattan on behalf of three former IBM employees who say the tech giant discriminated against them based on their age when it fired them. Liss-Riordan, a partner at Lichten & Liss-Riordan in Boston, has represented workers against Amazon, Uber and Google and has styled her firm as the premier champion for employees left behind by powerful tech companies. "Over the last several years, IBM has been in the process of systematically laying off older employees in order to build a younger workforce," the former employees claim in the suit, which draws heavily on a ProPublica report published in March that said the company has fired more than 20,000 employees older than 40 in the last six years.

The lawsuit comes as IBM faces questions about its firing practices. In exhaustive detail, the ProPublica report made the case that IBM systematically broke age-discrimination rules. Meanwhile, the Equal Employment Opportunity Commission has consolidated complaints against IBM into a single, targeted investigation, according to a person familiar with it.
Further reading: IBM Fired Me Because I'm Not a Millennial, Alleges Axed Cloud Sales Star in Age Discrim Court Row, and IBM is Telling Remote Workers To Get Back in the Office Or Leave.
jepler
20 hours ago
I have a friend in her 40s who was terminated from her position at IBM this year, and I couldn't help wondering about this.
Earth, Sol system, Western spiral arm
tingham
20 hours ago
Similar situation but ~6/7 years ago.

A Solar Filament Erupts

What's happened to our Sun? Nothing very unusual -- it just threw a filament. Toward the middle of 2012, a long standing solar filament suddenly erupted into space producing an energetic Coronal Mass Ejection (CME). The filament had been held up for days by the Sun's ever changing magnetic field and the timing of the eruption was unexpected. Watched closely by the Sun-orbiting Solar Dynamics Observatory, the resulting explosion shot electrons and ions into the Solar System, some of which arrived at Earth three days later and impacted Earth's magnetosphere, causing visible aurorae. Loops of plasma surrounding an active region can be seen above the erupting filament in the featured ultraviolet image. Although the Sun is now in a relatively inactive state of its 11-year cycle, unexpected holes have opened in the Sun's corona allowing an excess of charged particles to stream into space. As before, these charged particles are creating auroras.
jepler
3 days ago
looks like art, is real
Earth, Sol system, Western spiral arm

AT&T and Verizon want to manage your identity across websites and apps


The four major US mobile carriers have unveiled a system that would let them manage your logins across any third-party website or app that hooks into it.

"Project Verify" from a consortium of AT&T, Verizon Wireless, T-Mobile US, and Sprint, was unveiled in a demo yesterday. It works similarly to other multi-factor authentication systems by letting users approve or deny login requests from other websites and apps, reducing the number of times users must enter passwords. The carriers' consortium is putting the call out to developers of third-party apps and websites, who can contact the consortium for information on linking to the new authentication system.

"The Project Verify app can be preloaded or downloaded to the user's mobile device," a video describing the technology says. "And then when they face a login screen on their favorite sites and apps, they select the verify option. That's it—Project Verify does the rest."

The carriers hope to launch Verify next year.

Introducing Project Verify.

The carrier system would verify each person's identity with "a multi-factor profile based around the user's personal mobile device," taking into account the user's phone number, account tenure, IP address, phone account type, and SIM card details. The system "combines the carriers' proprietary, network-based authentication capabilities with other methods to verify a user's identity," the carriers say.

Users would be able to log in to Project Verify-linked sites or apps by selecting the verify option within those apps or sites. The Project Verify app would let them manage which sites and apps are linked to their mobile identity.

Is this a good idea?

But do you want your carrier managing your logins across the websites and apps you use on your phone? AT&T, Verizon, T-Mobile, and Sprint aren't exactly the tech industry's best protectors of security and privacy.

The four major carriers were recently caught leaking the real-time location of most US cell phones. After facing pressure from Sen. Ron Wyden (D-Ore.), the carriers agreed to stop selling their mobile customers' location information to third-party data brokers.

The carriers don't face any major rules preventing them from misusing their customers' Web-browsing and app-usage data. Last year, the mobile carriers and other Internet providers convinced Congress and President Trump to prevent implementation of rules that would have forced them to get customers' opt-in consent before using, sharing, or selling their browsing and app-usage histories for advertising purposes.

“I don’t trust the carriers”

There are good reasons to be skeptical of the carriers' ability to securely manage logins, security reporter Brian Krebs wrote yesterday.

"A key question about adoption of this fledgling initiative will be how much trust consumers place with the wireless companies, which have struggled mightily over the past several years to validate that their own customers are who they say they are," Krebs wrote. He continued:

All four major mobile providers currently are struggling to protect customers against scams designed to seize control over a target's mobile phone number. In an increasingly common scenario, attackers impersonate the customer over the phone or in mobile retail stores in a bid to get the target's number transferred to a device they control. When successful, these attacks—known as SIM swaps and mobile number port-out scams—allow thieves to intercept one-time authentication codes sent to a customer's mobile device via text message or automated phone call.

AT&T VP Johannes Jaskolski, who is managing the carriers' Project Verify consortium, told Krebs that the system will not centralize subscriber data into a multi-carrier database.

"We're not going to be aggregating and centralizing this subscriber data, which will remain with each carrier separately," Jaskolski said. Verify "will be portable by design and is not designed to keep a subscriber stuck to one specific carrier." It will let the user maintain "control of whatever gets shared with third parties," he added.

But Krebs is still skeptical, and so is security researcher Nicholas Weaver of the International Computer Science Institute at UC Berkeley. Krebs paraphrased Weaver as saying that "this new solution could make mobile phones and their associated numbers even more of an attractive target for cyber thieves."

"The carriers have a dismal track record of authenticating the user," Weaver also said. "If the carriers were trustworthy, I think this would be unequivocally a good idea. The problem is I don't trust the carriers."

jepler
4 days ago
The idea of a telephone as a second factor (via SMS or app) is bad enough. The idea of it as an only factor (via proprietary carrier methods) is worse.
Earth, Sol system, Western spiral arm
satadru
4 days ago
Haha fuck no...
New York, NY
reconbot
4 days ago
This will only be used on media sites owned by these companies and by thieves
New York City

Autobuilding Debian packages on salsa with Gitlab CI


Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. With that in place, it might make sense to use that setup to automatically build and test a package whenever something is committed. I had a look at doing so for one of my packages, ola; the reason I chose that package is that it comes with an autopkgtest, which makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before and understand how it works, I thought it might be useful to show how it can be done for a Debian package.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest
.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
  - autoreconf -f -i
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/
.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null
build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid
test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the previous stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
  - autoreconf -f -i
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. Together, these make it a reusable job template rather than an actual job.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days or to whatever the salsa maintainers have configured (I'm not sure which applies).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.
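
For illustration only (these keys are not part of the original file), the artifacts block in the .build template could be given an explicit expiry, or swapped for a cache block keyed on the branch, roughly like this:

  # keep the downloadable .zip around for one week instead of the default
  artifacts:
    paths:
    - built
    expire_in: 1 week

  # alternative sketch: use the cache feature instead of artifacts
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
    - built

Do note that a cache is best-effort and is not guaranteed to be available to jobs that end up on a different runner, so artifacts remain the more reliable way to hand files to the next stage.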

.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.
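
To make that concrete, here is roughly what the build:testing job looks like once the merge has been applied; this expansion is mine and does not appear in the actual file:

build:testing:
  # the following keys are merged in from the .build template
  before_script:
  - apt-get update
  - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
  - autoreconf -f -i
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/
  # this key is set on the job itself
  image: debian:testing

Since the .build template doesn't define image there is no conflict here, but if a job and its template both define the same key, the value set on the job itself wins over the one merged in from the template.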

test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.
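
For comparison, a combined job that builds and runs autopkgtest inside the same container might look roughly like the sketch below. This job (and its name) is not part of the original file; it simply folds the .build and .test steps together to illustrate the trade-off:

# hypothetical single-container variant: convenient, but a missing runtime
# dependency can go unnoticed if it happens to be present as a build-dependency
build-and-test:unstable:
  image: debian:sid
  stage: build
  before_script:
  - apt-get update
  - apt-get -y install devscripts autoconf automake adduser fakeroot sudo autopkgtest
  - autoreconf -f -i
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  - mkdir built
  - dcmd mv ../*ges built/
  - autopkgtest built/*ges -- null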

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola repository on salsa.

jepler
4 days ago
beautiful
Earth, Sol system, Western spiral arm