Spanish police have arrested three people over a data breach linked to a series of dramatic intrusions at European spy software companies—feeding speculation that the net has closed on an online Robin Hood figure known as Phineas Fisher.
A spokesman with Mossos d’Esquadra, Catalonia’s regional police, said two men and a woman were arrested Tuesday in Salamanca and Barcelona on suspicion of breaking into the website of the Mossos labor union in May, hijacking its Twitter feed and leaking the personal data of more than 5,500 officers. No more arrests are expected, he added, speaking on condition of anonymity in line with force policy.
The arrests sent rumors flying online because the breach had been claimed by Phineas Fisher, a hacker who first won notoriety in 2014 for publishing data from Britain’s Gamma Group—responsible at the time for spyware known as FinFisher. The hacker, or group of hackers, cemented their reputation by claiming responsibility for a breach at Italy’s Hacking Team in 2015—a spectacular dump which exposed the inner workings of government espionage campaigns—and appearing as a hand puppet in an unusual interview in 2016.
The Andover, England-based Gamma Group did not immediately return messages left after hours. Neither did FinFisher, the Munich-based company which now sells the eponymous intrusion tool. Hacking Team spokesman Eric Rabe said he had “no special insight” into the arrests but declined to comment on whether his company was in touch with Spanish authorities.
Toni Castejon, the general secretary of the Catalan police union that was hit, said the language used by the hijacked Twitter account led him to doubt Phineas Fisher had been involved. The tweets were written “by somebody with perfect knowledge of a very informal kind of Catalan (language) that would have been impossible to achieve through online translation,” he said.
Source: https://phys.org/news/2017-01-spain-hacking-phineas-fisher.html
The company behind Snapchat has two offerings – that beloved, 5-year-old app for messaging and video streaming, and Spectacles, a months-old, $130 pair of sunglasses that double as a camcorder.
The Los Angeles company has promised more gadgets will follow. But even if hardware sales grew significantly, in the near term they would probably bring in about one-hundredth the revenue of the ads displayed on Snapchat.
It’s that potentially massive, multibillion-dollar ad business that has investors most excited about Snap Inc., which is expected to open its stock to public trading in the coming weeks in one of the tech industry’s highest-anticipated initial public offerings in years.
So why is Snap insisting it’s actually a camera company?
The label introduced last year raised questions about Snap’s hardware ambitions. But more than signaling that Snap is the next Apple Inc. – a firm that makes software and hardware but derives about two-thirds of its revenue from iPhone sales – the distinction could be an attempt to help investors see the nuance between the goals of Snapchat and those of its most potent rival, Facebook.
Unlike Facebook, Snap’s not out to connect everyone on the planet. Rather, its goal is to tinker with both the physical make of cameras and the code behind them, giving people new ways to chat with friends, have fun together and educate themselves about the world. Both companies rely on advertising revenue, but Snap, which declined to comment, appears to be suggesting its approach will be more focused.
“They need to show they are not just Facebook for teens,” said Gene Munster, who studied Apple’s finances for years and now co-manages investment firm Loup Ventures. Tech companies that enjoy the most sustained success have visions beyond what’s visible to most today, and Snap is arranging itself to join that group.
“Twenty years from now, the way we engage with the world will probably not be a phone,” Munster said. “Hardware changes are going to be happening, and this mission gives them a foothold and foundation to be prepared for this transition.”
Though the camera, both in apps and in gadgets, will be central to that aim, investors and the financial analysts who advise them ascribe minimal value to Spectacles and other hardware. They insist Snap’s real value is in the advertising business. If Snap wishes otherwise, it will have a long way to go before changing perceptions.
“I would be hard-pressed to imagine them as a hardware company unless it’s possible to see a long-term commitment to that business,” said Brian Wieser, who follows companies such as Facebook for Pivotal Research. “So for now, it’s an ad tech company.”
Still, Snap joins financial technology company Square Inc. as one of the first internet companies with revenue coming from both hardware and software at the time of an initial public stock offering. That split helps diversify its business, but it means Snap also will have to justify to investors any hardware-related expenses.
Snap has shared limited financial data with potential investors and met only with a select group of analysts. More could become clear when it publicly shares its stock prospectus, which could be as early as this week.
Experimentation doesn’t necessarily hurt share prices, said Scott Kessler, a financial analyst at CFRA. Amazon.com, Facebook and Google parent company Alphabet Inc. have gotten away with unrealized product goals as their core businesses continue to surge.
“People want to see these companies innovating and trying new things,” Kessler said.
But troubles can arise. For one, hardware can reduce earnings.
“Software is the way to go because that’s a more profitable business,” Kessler said. “Manufacturing things, that’s obviously more challenging from supply chain, cost perspective. It’s a lot different than someone going somewhere and downloading software.”
Still, companies often try to show investors before they go public that they are more than one-trick businesses. Ride-hailing service Uber Technologies Inc. has ventured into self-driving delivery trucks. Short-term rental booking giant Airbnb is trying to help consumers with more aspects of travel planning. Both could go public this year or next.
But companies new to public markets must live up to those promises or risk seeing their value fall.
The faltering shares of GoPro, which closed a much-heralded video-distribution business two years after its IPO, and Twitter, which couldn’t maintain user growth, reflect what happens when reality doesn’t meet expectations. For Square’s part, hardware has grown slightly as a portion of its revenue mix.
If anything, fear that many tech startups such as Snap are overvalued has led to more skepticism about second acts in the last year.
Chinese phone maker and social media app developer Meitu has seen its shares barely budge from their initial price since going public a month ago. About 95 percent of Meitu’s revenue comes from phone sales, and analysts question how fast the software business can grow.
In Snap’s favor is that its second revenue line already has inklings of success. Spectacles have received positive reviews. Investors point to the long lines Snap generated by selling the sunglasses through roving vending machines, a wacky experience that has energized the industry. But because Snap has released only thousands of pairs so far, investors are simply discounting the idea for now.
“It seems like a noble experiment akin to Google Glass, but not yet a central part of the Snap value proposition,” said Chris Rust, a founder at Clear Ventures who held a board observer role at GoPro.
Instead, Snap’s biggest challenge could be convincing investors to notice the distinctions with Facebook and showing them that profit is within sight.
Alexander Stimpson, co-chief investment officer at Newport Beach, Calif., money manager Corient Capital Partners, said he’s worried that companies going public before demonstrating recurring profitability have turned investors into speculators. It forces them to invest based on instincts rather than formulas. And despite the great risks, they stand to gain a much smaller return than the venture capitalists who held shares prior to the IPO.
“If a company is unprofitable, the rewards should be substantial because you’re taking substantial risk,” he said.
Because Snap isn’t yet profitable, Stimpson doesn’t mind coming late to the party when it may be a safer bet.
“When there’s no earnings there, it forces investors to behave in a way that’s against their best interests to be successful long term,” he said. “Investors are successful when they are disciplined about valuations, when profits matter, when metrics matter, when they buy low and sell high.”
Still, many analysts expect the excitement to be so great that Snap gets whatever price it wants. Going public could value Snap at upward of $25 billion.
“Any time you have a brand-name company, you’re going to have a lot of interest,” said Ivan Feinseth, director of research at Tigress Financial Partners. “They are very strong in the teen, preteen and the millennial market. They’re a key player.”
Source: https://phys.org/news/2017-01-spectacles-investors-snapchat-advertising.html
Japanese video game maker Nintendo Co.’s third-quarter profit more than doubled from a year earlier on healthy sales of Pokemon game software, the company said Tuesday.
Nintendo, which makes Super Mario games and will start selling the Switch console March 3, reported a better-than-expected October-December profit of 64.7 billion yen ($569 million), up from 29.1 billion yen in the same period of 2015.
Kyoto-based Nintendo raised its full year profit forecast to 90 billion yen ($792 million) from an earlier 50 billion yen ($440 million). That would mark a more than five-fold increase from what it earned the previous fiscal year.
It kept its sales forecast unchanged at 470 billion yen ($4.1 billion). Nintendo’s quarterly sales slipped 21 percent to 174.3 billion yen ($1.5 billion).
Nintendo’s bottom line was also helped by a relatively weak yen, which lifts the overseas revenue of Japanese companies that, like Nintendo, do much of their business abroad.
Nintendo has a lot riding on the Switch, the new game system that combines a portable hand-held device with a dock to use at home, and comes with detachable controllers. Although new machines tend to sell briskly at first, it’s difficult to maintain sales momentum.
Nintendo’s previous devices struggled against competition from smartphones and other mobile devices, which also offer entertainment.
The company said in a statement the success of the “Pokemon Go” augmented-reality game for smartphones last year led to bigger Pokemon game sales for Nintendo’s own portable 3DS machine in recent months.
After resisting switching to games on cellphones for years, fearing that could erode sales of its own consoles, Nintendo made its big push into mobile with “Super Mario Run” for the iPhone, which launched late last year.
The game was a big hit at first, but interest quickly fizzled. Nintendo said an Android version will become available in March.
Source: https://phys.org/news/2017-01-nintendo-quarter-profit-pokemon-game.html
The handwritten signature is still the most widely accepted biometric used to verify a person’s identity. Banks, corporations, and government bodies rely on the human eye and digital devices such as tablets or smart pens to capture, analyse, and verify people’s autographs.
New software developed by researchers at Tel Aviv University and Ben-Gurion University of the Negev now enables smartwatches, currently worn by one in six people around the world, to verify handwritten signatures.
The accompanying study was recently published on arXiv. It is available at https://arxiv.org/abs/1612.06305.
“A popular device worn by so many people should feature additional, critically useful functions,” said study co-author Dr. Erez Shmueli of TAU’s Department of Industrial Engineering, who added that 373 million of these devices will be in use by 2020. “Considering how dependent we are on signatures, we decided to develop software that would verify the smartwatch device wearer’s handwritten signature.”
The next step in signature verification
Signing on a digital pad or using a special electronic pen has replaced pen and paper in many instances, but these alternatives often require cumbersome dedicated devices. The new software developed by Dr. Shmueli and his student Alona Levy, in collaboration with Prof. Yuval Elovici of BGU’s Department of Software and Information Systems Engineering and his student Ben Nassi, would turn any generic smartwatch into an expert signature verifier.
The novel technology utilizes motion data—a person’s wrist movements measured by an accelerometer or a gyroscope—to uniquely identify them during the signing process and subsequently classify the signature as either genuine or forged.
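To make the idea concrete, here is a deliberately simplified sketch in Python of template-based verification from a motion trace. The two features (overall variability and average sample-to-sample change, a crude proxy for writing speed), the fixed distance threshold, and all function names are illustrative assumptions; the researchers' actual system trains on much richer accelerometer and gyroscope data.

```python
import math
import statistics

# Hedged sketch only: features, threshold, and names are assumptions,
# not the paper's method.

def features(signal):
    """Two crude features of a 1-D motion trace: overall variability
    and mean sample-to-sample change (a proxy for writing speed)."""
    deltas = [abs(b - a) for a, b in zip(signal, signal[1:])]
    return (statistics.pstdev(signal), statistics.fmean(deltas))

def distance(f, g):
    """Euclidean distance between two feature vectors."""
    return math.dist(f, g)

def enroll(samples):
    """Average the feature vectors of genuine enrollment signatures
    into a single template."""
    feats = [features(s) for s in samples]
    return tuple(statistics.fmean(axis) for axis in zip(*feats))

def verify(template, signal, threshold=0.1):
    """Accept an attempt as genuine if its features sit close enough
    to the enrolled template."""
    return distance(template, features(signal)) < threshold
```

With synthetic traces, a new sample of the same motion pattern falls near the template while a differently paced "forgery" lands well outside the threshold, which is the essence of the genuine-versus-forged classification described above.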
“Using a wrist-worn device such as a smartwatch or a fitness tracker bears obvious advantages over other wearable devices, since it measures the gestures of the entire wrist rather than a single finger or an arm,” said Dr. Shmueli. “While several other recent studies have examined the option of using motion data to identify users, this is its first application to verify handwritten signatures—still a requirement at the bank, the post office, your human resources department, etc.”
The team tested its system on 66 TAU undergraduates. The students, all wearing smartwatches, were asked to provide 15 signature samples on a tablet, using the tablet’s digital pen. The students were then shown video recordings of people signing during the first phase, and were asked to forge five of those signatures. The students were given ample time to practice and were compensated for “exceptional forgeries.”
The smartwatch, equipped with the new verification software, was able to detect forgery with an extremely high level of accuracy.
“Next we plan to compare our approach with existing state-of-the-art methods for offline and online signature verification,” said Dr. Shmueli. “We would also like to investigate the option of combining data extracted from the wearable device with data collected from a tablet device to achieve even higher verification accuracy.”
The researchers have applied for a patent in an initial step toward commercializing their system.
Source: https://phys.org/news/2017-01-smartwatch-software-signatures.html
Homes with solar panels do not require on-site storage to reap the biggest economic and environmental benefits of solar energy, according to research from the Cockrell School of Engineering at The University of Texas at Austin. In fact, storing solar energy for nighttime use actually increases both energy consumption and emissions compared with sending excess solar energy directly to the utility grid.
In a paper published in Nature Energy on Jan. 30, researchers assessed the trade-offs of adding home energy storage to households with existing solar panels, shedding light on the benefits and drawbacks of adding storage considering today’s full energy grid mix.
According to the Solar Energy Industry Association, the number of rooftop solar installations grew to more than 1 million U.S. households in 2016. There is a growing interest in using energy storage to capture solar energy to reduce reliance on traditional utilities. But for now, few homes have on-site storage to hold their solar energy for later use in the home.
“The good news is that storage isn’t required to make solar panels useful or cost-effective,” said co-author Michael Webber, a professor in the Department of Mechanical Engineering and deputy director of UT Austin’s Energy Institute. “This also counters the prevailing myth that storage is needed to integrate distributed solar power just because it doesn’t produce energy at night.”
Webber and co-author Robert Fares, a Cockrell School alumnus who is now an American Association for the Advancement of Science fellow at the U.S. Department of Energy, analyzed the impact of home energy storage using electricity data from almost 100 Texas households that are part of a smart grid test bed managed by Pecan Street Inc., a renewable energy and smart technology company housed at UT Austin.
They found that storing solar energy for nighttime use increases a household’s annual energy consumption—in comparison with using solar panels without storage—because storage consumes some energy every time it charges and discharges. The researchers estimated that adding energy storage to a household with solar panels increases its annual energy consumption by about 324 to 591 kilowatt-hours.
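The size of that increase follows directly from battery round-trip losses: delivering energy from storage always requires charging with more energy than comes back out. The arithmetic can be sketched as follows, with an assumed round-trip efficiency and annual throughput chosen purely for illustration (they are not figures from the study):

```python
# Back-of-the-envelope sketch of why storage raises consumption.
# The 85% efficiency and 2,500 kWh/year throughput are illustrative
# assumptions, not values from the Nature Energy paper.

def extra_consumption(delivered_kwh, round_trip_eff):
    """Annual energy lost to storage: delivering E kWh from the battery
    requires charging E / eff kWh, so the loss is E * (1/eff - 1)."""
    return delivered_kwh * (1.0 / round_trip_eff - 1.0)

loss = extra_consumption(delivered_kwh=2500, round_trip_eff=0.85)
print(round(loss))  # 441 kWh/year, inside the study's 324-591 kWh range
```

Against a typical household's annual consumption of a few thousand kilowatt-hours, a loss of this size lands in the same high-single-digit to low-double-digit percentage range the researchers report.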
“I expected that storage would lead to an increase in energy consumption,” Fares said. “But I was surprised that the increase could be so significant—about an 8 to 14 percent increase on average over the year.”
The researchers also found that adding storage indirectly increases overall emissions of carbon dioxide, sulfur dioxide and nitrogen dioxide based on today’s Texas grid mix, which is primarily made up of fossil fuels. The increase in emissions is primarily due to the increase in energy consumption required to account for storage inefficiencies. Because storage affects what time of day a household draws electricity from the grid, it also influences emissions in that way.
If a homeowner is seeking to reduce his or her environmental footprint, adding storage would not make the household more green, but it shouldn’t be dismissed either, the researchers said.
“Solar combined with storage is still a lot cleaner than having no solar at all,” Fares said.
For utility companies, the benefits are more clear-cut. Solar energy storage reduces peak grid demand by 8 to 32 percent and the magnitude of solar power injections to the grid by 5 to 42 percent. This is good for the utility because it can reduce the electricity generation and delivery capacity required.
“However, if the utility is interested in reducing emissions, incentivizing home storage is probably not a good idea,” Fares said.
In short, the analysis showed that storing solar energy today offers fewer environmental benefits than just sending it straight to the grid, because the energy lost to storage inefficiencies is ultimately made up with fossil-fuel electricity from the grid. “These findings challenge the myth that storage is inherently clean, but that, in turn, offers useful insights for utility companies,” Webber said.
“If we use the storage as the means to foster the adoption of significantly more renewables that offset the dirtiest sources, then storage—done the right way and installed at large-scale—can have beneficial impacts on the grid’s emissions overall,” Webber said.
The impacts of storing solar energy in the home to reduce reliance on the utility, Nature Energy, nature.com/articles/doi:10.1038/nenergy.2017.1
Source: https://phys.org/news/2017-01-solar-power-energy-consumption-emissions.html
General Motors Co. and Honda Motor Co. took a big step toward putting out vehicles powered by hydrogen fuel cells by forming a joint venture to produce the systems for both companies’ vehicles.
The automakers expect to begin production in 2020 at a GM battery-pack facility south of Detroit, creating about 100 new jobs. They’ll also work together on setting up fueling stations to make the cars marketable.
And executives say the use of fuel cell systems may not be limited to cars. They said at a joint news conference Monday in Detroit that they’re exploring military, aerospace and even residential uses for the systems, which generate electricity to power vehicles.
The companies are splitting the venture’s $85 million cost equally; the venture grew out of a cooperative agreement on fuel cells that began in July 2013. Executives said costs have come down dramatically since then and the new fuel cell system has become smaller, lighter, less complex and more durable.
The fuel cell producing part of the system has been reduced to the size of a box that would come close to fitting onto an airplane as carry-on luggage. A first-generation system from GM took up the entire floor space in a van, executives said.
GM isn’t ready to say when it might have a fuel cell car ready to go on sale widely to the public. But product development chief Mark Reuss said that’s not the only use. Fuel cells could have military applications as well as aerospace and even as home power generators, he said.
“The Army is very interested in that,” Reuss said. “We’ve also done a lot of aerospace exploration on some of the backup systems that may be in some of those planes.”
Honda started delivering the third generation of its Clarity fuel cell vehicle to U.S. customers in December.
Although GM and Honda have trimmed costs on the system by reducing the amount of precious metals in it and making it more efficient, the cost still will be higher than an internal combustion engine when production starts in 2020, said Charlie Freese, GM’s executive director of global fuel cell business.
“We do think the building blocks are in place to close that gap,” he said. “We are taking a lot of cost out to make the system much more affordable.”
Source: https://phys.org/news/2017-01-gm-honda-team-advanced-hydrogen.html
Compilers are programs that convert computer code written in high-level languages intelligible to humans into low-level instructions executable by machines.
But there’s more than one way to implement a given computation, and modern compilers extensively analyze the code they process, trying to deduce the implementations that will maximize the efficiency of the resulting software.
Code explicitly written to take advantage of parallel computing, however, usually loses the benefit of compilers’ optimization strategies. That’s because managing parallel execution requires a lot of extra code, and existing compilers add it before the optimizations occur. The optimizers aren’t sure how to interpret the new code, so they don’t try to improve its performance.
At the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel Programming next week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will present a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution.
As a consequence, says Charles E. Leiserson, the Edwin Sibley Webster Professor in Electrical Engineering and Computer Science at MIT and a coauthor on the new paper, the compiler “now optimizes parallel code better than any commercial or open-source compiler, and it also compiles where some of these other compilers don’t.”
That improvement comes purely from optimization strategies that were already part of the compiler the researchers modified, which was designed to compile conventional, serial programs. The researchers’ approach should also make it much more straightforward to add optimizations specifically tailored to parallel programs. And that will be crucial as computer chips add more and more “cores,” or parallel processing units, in the years ahead.
The idea of optimizing before adding the extra code required by parallel processing has been around for decades. But “compiler developers were skeptical that this could be done,” Leiserson says.
“Everybody said it was going to be too hard, that you’d have to change the whole compiler. And these guys,” he says, referring to Tao B. Schardl, a postdoc in Leiserson’s group, and William S. Moses, an undergraduate double major in electrical engineering and computer science and physics, “basically showed that conventional wisdom to be flat-out wrong. The big surprise was that this didn’t require rewriting the 80-plus compiler passes that do either analysis or optimization. T.B. and Billy did it by modifying 6,000 lines of a 4-million-line code base.”
Schardl, who earned his PhD in electrical engineering and computer science (EECS) from MIT, with Leiserson as his advisor, before rejoining Leiserson’s group as a postdoc, and Moses, who will graduate next spring after only three years, with a master’s in EECS to boot, share authorship on the paper with Leiserson.
Forks and joins
A typical compiler has three components: the front end, which is tailored to a specific programming language; the back end, which is tailored to a specific chip design; and what computer scientists oxymoronically call the middle end, which uses an “intermediate representation,” compatible with many different front and back ends, to describe computations. In a standard, serial compiler, optimization happens in the middle end.
The researchers’ chief innovation is an intermediate representation that employs a so-called fork-join model of parallelism: At various points, a program may fork, or branch out into operations that can be performed in parallel; later, the branches join back together, and the program executes serially until the next fork.
In the current version of the compiler, the front end is tailored to a fork-join language called Cilk, pronounced “silk” but spelled with a C because it extends the C programming language. Cilk was a particularly congenial choice because it was developed by Leiserson’s group—although its commercial implementation is now owned and maintained by Intel. But the researchers might just as well have built a front end tailored to the popular OpenMP or any other fork-join language.
Cilk adds just two commands to C: “spawn,” which initiates a fork, and “sync,” which initiates a join. That makes things easy for programmers writing in Cilk but a lot harder for Cilk’s developers.
With Cilk, as with other fork-join languages, the responsibility of dividing computations among cores falls to a management program called a runtime. A program written in Cilk, however, must explicitly tell the runtime when to check on the progress of computations and rebalance cores’ assignments. To spare programmers from having to track all those runtime invocations themselves, Cilk, like other fork-join languages, leaves them to the compiler.
All previous compilers for fork-join languages are adaptations of serial compilers and add the runtime invocations in the front end, before translating a program into an intermediate representation, and thus before optimization. In their paper, the researchers give an example of what that entails. Seven concise lines of Cilk code, which compute a specified term in the Fibonacci series, require the compiler to add another 17 lines of runtime invocations. The middle end, designed for serial code, has no idea what to make of those extra 17 lines and throws up its hands.
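A fork-join Fibonacci like the one the researchers cite can be sketched in any language that lets one branch of a computation run concurrently with another. The sketch below uses Python threads purely to illustrate the model (real Cilk code would use its "spawn" and "sync" keywords in C, and a real runtime would balance work across cores rather than spawn raw threads); the serial cutoff is also an illustrative assumption:

```python
import threading

# Illustrative fork-join sketch: Python threads stand in for Cilk's
# "spawn" and "sync". Not the researchers' code.

def fib(n, cutoff=20):
    """Fork fib(n-1) onto a new thread ("spawn"), compute fib(n-2) in
    the current thread, then "sync" by joining before combining."""
    if n < 2:
        return n
    if n < cutoff:                       # small problems run serially
        return fib(n - 1, cutoff) + fib(n - 2, cutoff)
    result = [0]
    def branch():
        result[0] = fib(n - 1, cutoff)
    t = threading.Thread(target=branch)
    t.start()                            # fork ("spawn")
    y = fib(n - 2, cutoff)               # continue on the current branch
    t.join()                             # join ("sync")
    return result[0] + y
```

The structure mirrors the seven-line Cilk program: two extra operations mark where parallelism begins and ends, and everything between a fork and its join can proceed concurrently.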
The only alternative to adding the runtime invocations in the front end, however, seemed to be rewriting all the middle-end optimization algorithms to accommodate the fork-join model. And to many—including Leiserson, when his group was designing its first Cilk compilers—that seemed too daunting.
Schardl and Moses’s chief insight was that injecting just a little bit of serialism into the fork-join model would make it much more intelligible to existing compilers’ optimization algorithms. Where Cilk adds two basic commands to C, the MIT researchers’ intermediate representation adds three to a compiler’s middle end: detach, reattach, and sync.
The detach command is essentially the equivalent of Cilk’s spawn command. But reattach commands specify the order in which the results of parallel tasks must be recombined. That simple adjustment makes fork-join code look enough like serial code that many of a serial compiler’s optimization algorithms will work on it without modification, while the rest need only minor alterations.
Indeed, of the new code that Schardl and Moses wrote, more than half was the addition of runtime invocations, which existing fork-join compilers add in the front end, anyway. Another 900 lines were required just to define the new commands, detach, reattach, and sync. Only about 2,000 lines of code were actual modifications of analysis and optimization algorithms.
To test their system, the researchers built two different versions of the popular open-source compiler LLVM. In one, they left the middle end alone but modified the front end to add Cilk runtime invocations; in the other, they left the front end alone but implemented their fork-join intermediate representation in the middle end, adding the runtime invocations only after optimization.
Then they compiled 20 Cilk programs on both. For 17 of the 20 programs, the compiler using the new intermediate representation yielded more efficient software, with gains of 10 to 25 percent for a third of them. On the programs where the new compiler yielded less efficient software, the falloff was less than 2 percent.
“For the last 10 years, all machines have had multicores in them,” says Guy Blelloch, a professor of computer science at Carnegie Mellon University. “Before that, there was a huge amount of work on infrastructure for sequential compilers and sequential debuggers and everything. When multicore hit, the easiest thing to do was just to add libraries [of reusable blocks of code] on top of existing infrastructure. The next step was to have the front end of the compiler put the library calls in for you.”
“What Charles and his students have been doing is actually putting it deep down into the compiler so that the compiler can do optimization on the things that have to do with parallelism,” Blelloch says. “That’s a needed step. It should have been done many years ago. It’s not clear at this point how much benefit you’ll gain, but presumably you could do a lot of optimizations that weren’t possible.”
Source: https://phys.org/news/2017-01-middle-popular-yields-more-efficient-parallel.html
EPFL researchers have developed an algorithm for automated vehicles to operate in traffic alongside manually-driven vehicles. This is a key step in the shift towards autonomous driving expected to be achieved by 2030.
One thing is certain: one day our cars will drive themselves. But how will we make the transition from a handful of autonomous and connected cars today to a true smart system offering enhanced safety, comfort and seamless and robust operation, in just 15 years’ time? Researchers who worked on the European AutoNet2030 project believe it can be achieved by combining driving assistance technologies and inter-vehicle communications. They have recently shown that it is possible for vehicles with or without drivers to operate in high-speed, multi-lane traffic autonomously under real-life conditions. This is a key step in the ongoing shift towards autonomous driving. EPFL’s contribution to this project came in the form of a cooperative maneuvering control algorithm.
Thanks to a communication protocol based on Wi-Fi, vehicles can now share information among each other. This, combined with an array of driving-assistance devices – GPS, lasers, video cameras and other sensors – gives vehicles the ability to drive completely on their own. That said, it will be another 15 years before most vehicles are equipped with these devices, heralding a true driverless future.
Cooperation and autonomy
In the next few years, how will these cutting-edge vehicles rolling off the assembly lines fit into the traffic system alongside legacy vehicles? One option under study is to have automated vehicles travel in convoys. For example, a manually-driven truck could lead a platoon of autonomous tractor trailers moving at a constant speed and at an equal distance from each other. This approach has been successfully tested over hundreds of kilometers in Australia. The only problem is that this type of convoy behaves as a single rigid block which, above a certain number of vehicles, becomes increasingly difficult to manage.
The AutoNet2030 researchers came up with another solution: a cooperative, distributed system. Out goes the leader: each connected vehicle communicates directly with the other vehicles in its immediate vicinity, then adjusts its own speed and position independently. The convoy has no trouble driving across one or more highway lanes or reconfiguring when another vehicle joins the group. Each vehicle also benefits from its neighbors’ ‘eyes’, effectively enjoying 360-degree perception. What’s more, in theory there is no upper limit to the size of the convoy, since each member positions itself independently.
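The leaderless scheme described above can be sketched as a simple neighbor-based control loop: each vehicle nudges its speed toward the average of its immediate neighbors and corrects its gap to the vehicle ahead. The gains, spacing rule and update law below are illustrative assumptions, not the actual AutoNet2030 algorithm.

```python
# Minimal 1-D sketch of distributed, leaderless convoy control.
# Each vehicle uses ONLY local neighbor data: no vehicle leads,
# yet the group converges to a common speed and uniform spacing.
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float  # metres along the lane
    speed: float     # m/s

def control_step(vehicles, target_gap=30.0, k_speed=0.5, k_gap=0.1, dt=0.1):
    """One synchronous update; accelerations computed from local state."""
    updates = []
    for i, v in enumerate(vehicles):
        neighbors = [vehicles[j] for j in (i - 1, i + 1) if 0 <= j < len(vehicles)]
        # Match the average speed of the immediate neighbors.
        avg_speed = sum(n.speed for n in neighbors) / len(neighbors)
        accel = k_speed * (avg_speed - v.speed)
        # Correct spacing toward the vehicle ahead, if any.
        if i + 1 < len(vehicles):
            gap = vehicles[i + 1].position - v.position
            accel += k_gap * (gap - target_gap)
        updates.append(accel)
    # Apply all updates at once so every vehicle acted on the same snapshot.
    for v, a in zip(vehicles, updates):
        v.speed += a * dt
        v.position += v.speed * dt

# Three vehicles with unequal speeds and spacing gradually settle into
# a uniform-gap, common-speed convoy through repeated local updates.
convoy = [Vehicle(0.0, 24.0), Vehicle(25.0, 28.0), Vehicle(60.0, 26.0)]
for _ in range(2000):
    control_step(convoy)
```

Because each update rule mentions only a vehicle’s immediate neighbors, adding a fourth or fortieth vehicle changes nothing in the per-vehicle computation, which is why the convoy size has no theoretical upper limit.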
Simple units, complex behavior
Convoys are managed using control software based on an algorithm developed by EPFL’s Distributed Intelligent Systems and Algorithms Laboratory (DISAL). “We have been working on this type of distributed control algorithm for around ten years. Simply put, the idea is to find a way for agents that are not particularly clever – robots or cars – to work together and achieve complex group behavior,” said Alcherio Martinoli, the head of DISAL. In mathematical terms, this means the algorithm uses information received from the agents’ sensors to guide the convoy’s movements in real time. The convoy automatically and constantly reorganizes when, for example, another vehicle joins or leaves it, when it changes lanes, or when it adapts to a new target speed. The DISAL researchers began by managing robots in simulation before moving on to real miniature robots and then to simulated cars. Finally, as part of the AutoNet2030 project, they made the leap to real vehicles on the road.
The final demonstration took place at the end of October 2016 in Sweden, on the AstaZero test track. Three vehicles were used: an automated truck and car and – a key aspect of the project – a networked though manually-driven car. The researchers equipped the non-automated car with GPS and laser sensors and a human-machine interface allowing the driver to follow instructions on joining the convoy.
“It may not seem so impressive with only three cars, but for the first time we were able to validate what we had achieved in the simulation. And the number of vehicles in the convoy has no impact on the complexity of the control mechanism,” said Martinoli. What’s next? “This is a proof of concept,” said Guillaume Jornod, the EPFL scientist who ran the trials. “We are hoping that, with a rise in demand, carmakers will come up with ever cheaper solutions for converting legacy vehicles, that they will coordinate their efforts with the community working on the Internet of things, and that we will be able to deploy and improve this multi-lane convoy system for heterogeneous vehicles.”
Explore further: Autonomous-driving Volvo convoy takes road in Spain
find on : https://phys.org/news/2017-01-driver-vehicles-cooperate.html
Wittingly or not, major global corporations are helping fund sites that traffic in fake news by advertising on them.
Take, for instance, a story that falsely claimed former President Barack Obama had banned Christmas cards to overseas military personnel. Despite debunking by The Associated Press and other fact-checking outlets, that article lives on at “Fox News The FB Page,” which has no connection to the news channel although it bears a replica of the channel’s logo.
And until recently, the story was often flanked by ads from big brands such as the insurer Geico, the business-news outlet Financial Times, and the beauty-products maker Revlon.
This is hardly an isolated case, although major companies generally say they have no intention of bankrolling purveyors of fake news with their ad dollars. Because many of their ads are placed on websites by computer algorithms, it’s not always easy for these companies to steer them away from sites they find objectionable.
Google, the biggest player in the digital ad market, places many of these ads. The company says it bars ads on its network from appearing against “misrepresentative content”—its term for fake news—yet Google spokeswoman Andrea Faville acknowledged that the company had sold ads on the site with the Christmas-card story. Those ads vanished after The Associated Press inquired about them. Faville declined to comment on their disappearance.
ADS THAT GO WHERE THEY WILL
Media advertising was much simpler when companies had only to buy ad space in newspapers or magazines to reach readers in a particular demographic category. Digital ads, by contrast, can wind up in unexpected places because they’re placed by automated systems, not sales teams, and targeted at individuals rather than entire demographics.
In effect, these ads follow potential customers around the web, where a tangle of networks and exchanges place them into ad slots at online publications. These middlemen have varying standards and levels of interest in helping advertisers ensure that their ads avoid controversy.
“A brand wouldn’t have a real foolproof way of not getting on sites that have issues like this,” said Joseph Galarneau, CEO of the New York City startup Mezzobit, which helps publishers and marketers manage advertising technology.
AUTOMATIC FAKE-NEWS FUNDING
Such automated ads are a major income source for fake news stories, which may have influenced voters in the U.S. presidential election. False stories can undermine trust in real news—and they can be dangerous. A widely shared but untrue story that pegged a Washington, D.C., pizzeria as part of a Hillary Clinton-run child sex trafficking ring led a man to fire a gun in the restaurant.
This largely invisible web of automated exchanges and ad networks funds millions of online sites, from niche, small-traffic blogs to professional news and entertainment sites with audiences in the tens of millions. By following web users to smaller sites, advertisers can reach them more cheaply than by limiting themselves to “premium” websites like the Washington Post, CBS or ESPN.
The megaphone of social media can give marginal sites a big lift. When a fake-news story spreads on Facebook, lots of people end up on the article’s original site—and ads follow. The result: Big companies help fund some low-rent websites trafficking in conspiracy theories and other unverified claims, at the measly rate of a fraction of a cent per person per ad.
WHERE “FAKE” FALLS THROUGH THE CRACKS
While advertising technology vendors have safeguards in place to help mainstream advertisers avoid porn or hate speech, those don’t always work for spoof news sites, said Marc Goldberg, CEO of Trust Metrics. Advertisers pay him to keep them off unwanted sites.
That’s partly because “fake news” can be hard to define. And while advertisers can come up with “blacklists” of sites to avoid, there’s no guarantee that ad-tech vendors farther down the food chain will honor them, said Susan Bidel, an advertising analyst for research firm Forrester.
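At its simplest, the blacklist described above is a domain match performed before a bid is placed on an ad slot. A hypothetical sketch (the function name, block list and matching rule are illustrative, not any vendor’s actual API):

```python
# Illustrative advertiser-side blacklist check: before bidding on an
# ad slot, match the publisher's hostname against a block list.
# Domain names and logic are hypothetical examples.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"fakenews.example", "spoof-site.example"}

def should_bid(page_url: str) -> bool:
    """Return True if the page's host is not on (or under) a blocked domain."""
    host = urlparse(page_url).hostname or ""
    # Block exact matches and any subdomain of a blocked domain.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

should_bid("https://news.example.com/story")        # allowed
should_bid("https://cdn.fakenews.example/article")  # blocked, including subdomains
```

The catch the article identifies is that this check only works if every intermediary in the chain actually runs it; an advertiser’s list does nothing on exchanges that ignore it.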
Many publishers and advertisers use Google’s ad technology without having Google sell their ads. In those cases, Google’s misrepresentative-content policy doesn’t apply.
BRANDS IN A BIND
When the AP pointed out that a Chrysler Ram truck ad popped up on a story saying that the United Nations was making the U.S. pay reparations to African-Americans—it’s not—Fiat Chrysler said it works with ad companies to scour individual sites and block them from loading its ads if it finds them “harmful.”
An ad for would-be Amazon rival Jet.com, owned by Walmart, showed up on a misleading story claiming California had legalized child prostitution. The company said in an emailed statement that it has filters that stop its ads from loading “on these kinds of sites,” but wouldn’t provide more detail or explain its criteria.
Walgreens ads also popped up next to the child prostitution story on the site The Red Elephants, but the drugstore chain has since prevented its ads from appearing there, a company spokesman said.
A person who responded to an email sent to The Red Elephants declined to discuss the site’s advertising, but insisted that the child-prostitution story was true. The person declined to provide their name.
A Financial Times spokeswoman said in an emailed statement that the media company was “frustrated” to learn that its ads appeared next to fake news like the Christmas-card story, saying the situation underscored the “very real risk” of using automated ads. “We think the ad technology ecosystem could, and should, do more to improve brand safety,” she said.
Revlon declined to comment. A Geico spokeswoman said the company didn’t know about its ad that ran on the spoof Fox News site.
Explore further: Study: Ad-tech use shines light on fringe, fake news sites
find on : https://phys.org/news/2017-01-intentionally-big-brands-fund-fake.html