

Playing the Pixelflut



Every hacker gathering needs as many pixels as its hackers can get their hands on. Get a group together and you’ll be blinded by the amount of light on display. (We propose “a blinkenlights” as the taxonomic name for such a group.) At a large gathering, what better way to show off your elite hacking ability than a “competition” over who can paint an LED canvas the best? Enter Pixelflut, the multiplayer drawing canvas.

Pixelflut has been around since at least 2012, but it came to this author’s attention after editor [Jenny List] noted it in her review of SHA 2017. What was that beguiling display behind the central bar? It turns out it was a display driven by a server running Pixelflut. A Pixelflut server exposes a display which can be drawn on by sending commands over the network in an extremely simple protocol. There are just four ASCII commands supported by every server — essentially get pixel, set pixel, screen size, and help — so implementing either a client or server is a snap, and that’s sort of the point.
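To make the simplicity concrete, here is a minimal client sketch in Python. It assumes the conventional plain-TCP Pixelflut dialect (newline-terminated commands like `PX x y RRGGBB` and `SIZE`) and the customary port 1234; the hostname is a placeholder, and a real server’s `HELP` output is the authority on the exact syntax it accepts.

```python
import socket

def px_command(x, y, rgb):
    """Build a set-pixel command: b'PX <x> <y> <rrggbb>\\n'."""
    return f"PX {x} {y} {rgb:06x}\n".encode("ascii")

def draw_rect(host, port, x0, y0, w, h, rgb):
    """Flood a w-by-h rectangle of one colour onto the server's canvas."""
    payload = b"".join(
        px_command(x, y, rgb)
        for y in range(y0, y0 + h)
        for x in range(x0, x0 + w)
    )
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Example against a hypothetical server at your gathering:
#   draw_rect("pixelflut.example.net", 1234, 10, 10, 50, 50, 0xFF8800)
```

Batching all the `PX` lines into one `sendall` matters more than you might expect: one write per pixel is exactly the kind of client that loses the bandwidth game.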

While the original implementations appear to have been written by [defnull], at the link at the top, Pixelflut is really more of a common protocol than a single implementation. One “plays” any of a variety of Pixelflut minigames. When there is a display in a shared space, the game is who can control the most area by drawing the fastest, either by being clever or by consuming as much bandwidth as possible.

Then there is the game of who can write the fastest, most battle-hardened server possible in order to handle all that traffic without collapsing. To give a sense of scale, one installation at 36c3 reported that a truly gargantuan 0.5 petabytes of data were transferred at a peak rate of more than 30 gigabits/second, just painting pixels! That’s bound to bog down all but the most lithe server implementations. (“Flut” is German for “flood.”)
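For a feel of the server side, here is a toy sketch in Python: a parser for the four core commands feeding an asyncio connection handler, with the canvas held in a plain dict. The canvas size and the exact reply strings are assumptions for illustration; a server surviving 30 gigabits/second would need batched parsing, a real framebuffer, and far more care than this.

```python
import asyncio

WIDTH, HEIGHT = 800, 600
canvas = {}  # (x, y) -> 'rrggbb' hex colour string

def handle_line(line):
    """Parse one Pixelflut command; return the reply string, or None."""
    parts = line.strip().split()
    if not parts:
        return None
    cmd = parts[0].upper()
    if cmd == "SIZE":
        return f"SIZE {WIDTH} {HEIGHT}\n"
    if cmd == "HELP":
        return "HELP: PX x y [rrggbb] | SIZE | HELP\n"
    if cmd == "PX" and len(parts) == 4:          # set pixel
        x, y, colour = int(parts[1]), int(parts[2]), parts[3].lower()
        canvas[(x, y)] = colour
        return None
    if cmd == "PX" and len(parts) == 3:          # get pixel
        x, y = int(parts[1]), int(parts[2])
        return f"PX {x} {y} {canvas.get((x, y), '000000')}\n"
    return "ERR unknown command\n"

async def client_loop(reader, writer):
    """One task per connection; StreamReader iterates line by line."""
    async for raw in reader:
        reply = handle_line(raw.decode("ascii", "replace"))
        if reply:
            writer.write(reply.encode("ascii"))
            await writer.drain()
    writer.close()

# To actually serve connections:
#   async def main():
#       server = await asyncio.start_server(client_loop, "0.0.0.0", 1234)
#       async with server:
#           await server.serve_forever()
#   asyncio.run(main())
```

Note that set-pixel deliberately returns nothing: replying to every `PX` write would double the traffic a flooded server has to push.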

While hacker camps may be on pause for the foreseeable future, writing a performant Pixelflut client or server seems like an excellent way to sharpen one’s skills while we wait for their return. For a video example check out the embed after the break. Have a favorite implementation? Tell us about it in the comments!



Guide to Understanding Artificial Intelligence



Getting started with Artificial Intelligence can raise numerous questions and cause confusion, given the speed at which the world is changing and adopting this technology. There are plenty of resources available online, but there has to be a starting point.

In this article, there is a brief introduction to Artificial Intelligence covering the important aspects one must go through to get a clear picture of this emerging technology. Artificial Intelligence has made commendable progress and is developing at lightning speed, reaching into nearly every industry in the market.

Knowing about this technology and its evolving faces has become a necessity rather than an optional extra.

The Basics Of Artificial Intelligence

Artificial Intelligence can be understood as a simulation of human intelligence. A simulation of human intelligence means that any task performed by a program or a machine will be carried out in the same way in which a human being would have done it.

Artificial intelligence cannot be captured in a single-line definition. It has an ultra-wide scope, dealing with problems and learning from past experience. At the core of artificial intelligence are its algorithms.

AI demonstrates some of the behavior that is linked with human intelligence such as planning, reasoning, learning, manipulation, creativity, and more.

The concept of what AI is and what it can do has changed from time to time. But the core idea can be explained as machines that can think and act like humans.

Developers and researchers are constantly working towards making machines capable of interpreting the world around them and adapting to change whenever required.

These are some of the basic characteristics of a human being, and machines have been taught them extensively for a decade through algorithms and programs.

Different Types Of AI

There are many sub-parts of Artificial Intelligence, but at a high level it can be broadly divided into two types - narrow AI and general AI.

Narrow AI

Narrow AI can be seen in all the computing devices that people use in their day-to-day lives. These devices know how to execute certain functions on their own, and that is what makes our lives easier.

People just need to press some buttons and all other work will be done by the machine. For example, we can take the voice assistants that make smartphones way smarter than they were before.

Earlier people needed to do everything by themselves but now they can ask the voice assistant to do those things for them. They can ask it for the temperature, the time, to call a contact, to read out the messages, and a lot more. These assistants are improving even more as AI is improving.

A few years ago, no one would have believed that something like AI would change the way technology behaves so much. Now the development industry is looking at it like it has never looked at anything before.

AI is the technology that has the power to change the way humans and machines interact. There may even come a time when people will not need to touch their machines for tasks like calling and writing emails.

AI will teach assistants and applications all these things. This is something that will help enterprises the most, increasing their productivity and saving a lot of their time.

The need to hire someone just to do their work will also be eliminated and that will reduce the expenses. There are many benefits of AI for both the general public and enterprises.

What Can Narrow AI Do?

This type of AI can help the traffic and surveillance departments by interpreting the video feeds that a drone or a CCTV camera takes. It can store information, categorize it, and give numbered reports to improve the services and to ensure the safety of an enterprise or a city.

The reason why governments want AI to be developed properly as soon as possible is that they will also get a lot of benefits. AI along with technologies like IoT can make cities smart and the administration smarter.

There are many applications of AI that can help authorities to analyze the current situations and make a better plan for the future. These things can be done with great quality with AI.

It can also organize and remind business people about their events and meetings. It can schedule emails, and make content personalized for better marketing and engagement with consumers.

These are just a few of the things narrow AI can do, and it has already made the applications and devices that people use every day very smart.

Businesses don’t need to pay for very high-end software because their smartphones already have many advanced features built in. In enterprise development, too, narrow AI is doing a really great job.

Developers are trying to teach their application most of the things that can save the time of the users. AI makes social media, search engines, and websites smart.

What Is General AI And What Can It Do?

Now, this is the best form of AI as it teaches machines or software to do the things that human beings can do. This is the reason why it is called Artificial General Intelligence, which means machines that can work like people.

They can be taught how they can make reports or polish shoes or iron clothes. These are things that can help them to do specific tasks just like humans.

They might do them in an even better way than humans with more accuracy and speed. General AI developers are focusing on developing machines and software that reduce the efforts that people have to put into things that can be easily done by a machine.

This will not eat up jobs; instead, it will increase productivity to a great extent. This will make the world truly smart and intelligent.

This is about making machines that can understand things by themselves and then carry out the tasks for which they have been created.

They can study data or the environment and then make decisions according to what they are able to do. This is intelligence, and this is the future of artificial intelligence technology.

Machine Learning

ML or machine learning is what makes an AI application intelligent enough to learn new things. This is the broad part of AI that most industries in the world are working on.

Because of this, a computer device takes in data, and with more data it gets more intelligent. It is like feeding a human being food that makes them stronger. Data makes a device that uses ML smart and intelligent.

Because of this, software can learn how to improve and personalize itself for its users. ML is also used in voice assistants, which is why they can remember what we say and search based on that.

Worth the Read: How Technology is Giving Relief to Children’s Learning Ability

Elements Of Machine Learning

Machine learning can be regarded as a subset of AI and has three main elements: supervised learning, unsupervised learning, and reinforcement learning.

Supervised Learning

This is a rather common technique for teaching systems. It is done using a huge number of labeled examples: the system is fed a large amount of data in which the features of interest have already been labeled.

Once the system is trained on these labeled examples, it can classify new data and even generate new examples.
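As a toy sketch of the idea, with made-up data and no particular library: a one-nearest-neighbour classifier “trains” simply by memorising labeled examples, then labels a new point by copying the label of the closest stored one.

```python
def train(examples):
    """'Training' for 1-nearest-neighbour is just memorising (features, label) pairs."""
    return list(examples)

def predict(model, point):
    """Label a new point with the label of the closest training example."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(model, key=lambda ex: dist2(ex[0], point))
    return label

# Made-up labeled data: (features, label)
model = train([((1.0, 1.0), "cat"), ((8.0, 9.0), "dog"), ((9.0, 8.0), "dog")])
```

Here `predict(model, (0.0, 2.0))` would answer "cat", because that point sits nearest the stored "cat" example. Real systems learn compressed representations rather than memorising, but the supervised shape, labeled examples in, labels for new data out, is the same.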

Unsupervised Learning

It is quite different from the above method, as the algorithm attempts to locate patterns in the data on its own. It clusters the data around these patterns to perform operations and give meaningful results.
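A minimal sketch of that clustering idea: a one-dimensional k-means pass, in pure Python with toy numbers, that groups unlabeled values around k centres with no labels involved anywhere.

```python
def kmeans_1d(values, centres, rounds=10):
    """Cluster unlabeled numbers around k centres by repeated reassignment."""
    centres = list(centres)
    for _ in range(rounds):
        # Assign each value to its nearest current centre.
        clusters = [[] for _ in centres]
        for v in values:
            nearest = min(range(len(centres)), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        # Move each centre to the mean of its cluster (keep it if empty).
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres
```

Fed `[1, 2, 3, 10, 11, 12]` with starting centres `[0.0, 5.0]`, the centres settle on 2.0 and 11.0: the algorithm discovered the two groups without ever being told they existed.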

Reinforcement Learning

This is a reward-based learning process. Here, rewards are given according to how the system acts on its input data. It is basically a trial-and-error process and is widely used in machine learning.
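The trial-and-error loop can be sketched with a toy two-armed bandit: an epsilon-greedy agent pulls two “arms” with different (made-up) reward probabilities, and the reward signal alone teaches it to prefer the better arm.

```python
import random

def run_bandit(reward_probs, pulls=2000, epsilon=0.1, seed=42):
    """Trial and error: mostly exploit the best-looking arm, sometimes explore."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)     # how often each arm was pulled
    values = [0.0] * len(reward_probs)   # running mean reward per arm
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))                       # explore
        else:
            arm = max(range(len(reward_probs)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

counts, values = run_bandit([0.2, 0.8])
```

After 2000 pulls the agent has pulled the 80%-reward arm far more often than the 20% one; nobody labeled the arms, the rewards did the teaching.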

Worth the Read: 5 Advantages Artificial Intelligence can Give Your SME

AI Changing The World

AI will change the world in many different ways. There are sectors that will be directly affected by it, like robotics and the automobile industry.

AI will make robots and cars smart enough to work on their own and control the things around them as per their programming. Though it will take some time to make them behave as naturally as humans, even now they are able to hold a conversation, understand what people say, and respond.

Another area that will be affected is the content that gets uploaded to the internet. Search engines have started using AI to filter out fake information and news.

This is the reason why the internet now does not list websites caught spreading fake news or misinformation. This will make the internet a better place for ordinary people. These are just some of the ways; there are many others in which it will change the world.

Worth the Read: How to Tame Artificial Intelligence: A Brief Guide for Business


Artificial Intelligence is the future. It has the scope to change technology for the betterment of the world and the people living in it.

AI will help businesses improve their processes, governments improve their administration, and the general public do their daily work more easily.

The post Guide to Understanding Artificial Intelligence appeared first on ReadWrite.



The EU is launching a market for personal data. Here’s what that means for privacy.



The European Union has long been a trendsetter in privacy regulation. Its General Data Protection Regulation (GDPR) and stringent antitrust laws have inspired new legislation around the world. For decades, the EU has codified protections on personal data and fought against what it viewed as commercial exploitation of private information, proudly positioning its regulations in contrast to the light-touch privacy policies in the United States.

The new European data governance strategy (pdf) takes a fundamentally different approach. With it, the EU will become an active player in facilitating the use and monetization of its citizens’ personal data. Unveiled by the European Commission in February 2020, the strategy outlines policy measures and investments to be rolled out in the next five years.

This new strategy represents a radical shift in the EU’s focus, from protecting individual privacy to promoting data sharing as a civic duty. Specifically, it will create a pan-European market for personal data through a mechanism called a data trust. A data trust is a steward that manages people’s data on their behalf and has fiduciary duties toward its clients.

The EU’s new plan considers personal data to be a key asset for Europe. However, this approach raises some questions. First, the EU’s intent to profit from the personal data it collects puts European governments in a weak position to regulate the industry. Second, the improper use of data trusts can actually deprive citizens of their rights to their own data.

The Trusts Project, the first initiative put forth by the new EU policies, will be implemented by 2022. With a €7 million budget, it will set up a pan-European pool of personal and nonpersonal information that should become a one-stop shop for businesses and governments looking to access citizens’ information.

Global technology companies will not be allowed to store or move Europeans’ data. Instead, they will be required to access it via the trusts. Citizens will collect “data dividends,” which haven’t been clearly defined but could include monetary or nonmonetary payments from companies that use their personal data. With the EU’s roughly 500 million citizens poised to become data sources, the trusts will create the world’s largest data market.

For citizens, this means the data created by them and about them will be held in public servers and managed by data trusts. The European Commission envisions the trusts as a way to help European businesses and governments reuse and extract value from the massive amounts of data produced across the region, and to help European citizens benefit from their information. The project documentation, however, does not specify how individuals will be compensated.

Data trusts were first proposed by internet pioneer Sir Tim Berners-Lee in 2018, and the concept has drawn considerable interest since then. Just like the trusts used to manage one’s property, data trusts may serve different purposes: they can be for-profit enterprises, or they can be set up for data storage and protection, or to work for a charitable cause.

IBM and Mastercard have built a data trust to manage the financial information of their European clients in Ireland; the UK and Canada have employed data trusts to stimulate the growth of the AI industries there; and recently, India announced plans to establish its own public data trust to spur the growth of technology companies.

The new EU project is modeled on Austria’s digital system, which keeps track of information produced by and about its citizens by assigning them unique identifiers and storing the data in public repositories.

Unfortunately, data trusts do not guarantee more transparency. The trust is governed by a charter created by the trust’s settlor, and its rules can be made to prioritize someone’s interests. The trust is run by a board of directors, which means a party that has more seats gains significant control.

The Trusts Project is bound to face some governance issues of its own. Public and private actors often do not see eye to eye when it comes to running critical infrastructure or managing valuable assets. Technology companies tend to favor policies that create opportunity for their own products and services. Caught in a conflict of interest, Europe may overlook the question of privacy.

And in some cases, data trusts have been used to strip individuals of their rights to control data collected about them. In October 2019, the government of Canada rejected a proposal by Alphabet/Sidewalk Labs to create a data trust for Toronto’s smart city project. Sidewalk Labs had designed the trust in a way that secured the company’s control over citizens’ data. And India’s data trust faced criticism for giving the government unrestricted access to personal information by defining authorities as “information fiduciaries.”

One possible solution could be to set up an ecosystem of data stewards, both public and private, that each serve different needs. Sylvie Delacroix and Neil Lawrence, the originators of this bottom-up approach, liken data trusts to pension funds, saying they should be tightly regulated and able to provide different services to designated groups.

When put into practice, the EU’s Trusts Project will likely change the privacy landscape on a global scale. Unfortunately, however, this new approach won’t necessarily give European citizens more privacy or control over their information. It is not yet clear what model of trusts the project will pursue, but the policies do not currently provide any way for citizens to opt out.

At a recent congressional antitrust hearing in the United States, four major platform companies publicly recognized the use of surveillance technologies, market manipulation, and forceful acquisitions to dominate the data economy. The single most important lesson from these revelations is that companies that trade in personal data cannot be trusted to store and manage it. Decoupling personal information from the platforms’ infrastructure would be a decisive step toward curbing their monopoly power. This can be done through data stewardship.

Ideally, the Trusts Project would show the world a more equitable way to capture and distribute the true value of personal data. There’s still time to deliver on that promise.

Anna Artyushina is a public policy scholar specializing in data governance and smart cities. She is a PhD candidate in science and technology studies at York University in Toronto.



Linux Fu: Remote Execution Made Easy



If you have SSH and a few other tools set up, it is pretty easy to log into another machine and run a few programs. This could be handy when you are using a machine that might not have a lot of memory or processing power and you have access to a bigger machine somewhere on the network. For example, suppose you want to reencode some video on a box you use as a media server but it would go much faster on your giant server with a dozen cores and 32 GB of RAM.

Remote Execution

However, there are a few problems with that scenario. First, you might not have the software on the remote machine. Even if you do, it might not be the version you expect or have all the same configuration as your local copy. Then there’s the file problem: the input file should come from your local file system, and you’d like the output to wind up there, too. These aren’t insurmountable, of course. You could install the program on the remote box and copy your files back and forth manually. Or you can use Outrun.

There are a few limitations, though. You do need Outrun on both machines and both machines have to have the same CPU architecture. Sadly, that means you can’t use this to easily run jobs on your x86-64 PC from a Raspberry Pi. You’ll need root access to the remote machine, too. The system also depends on having the FUSE file system libraries set up.

A Simple Idea

The idea is simple. You could do a video encoding like this:

outrun user@host ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4

This will work even if ffmpeg isn’t on the remote machine and the input and output files will be on your local box where you expect them. Here’s a screencast from the project’s GitHub page:

A Complex Implementation

How does this work? A FUSE file system mounts your local filesystem remotely using a lightweight RPC file system. Then a chroot makes the remote machine look just like your local machine but — presumably — faster. There are a few other things done, such as setting up the environment and current directory.

The chroot, by the way, is why you need root on the remote machine. As an ordinary user, you can’t pivot the root file system to make this trick work.

To improve performance, Outrun caches system directories and assumes they won’t change over the life of the command. It also aggressively prefetches using some heuristics to guess what files you’ll need in addition to the one that the system asked for.

The Future

We wish there were an option to assume the program will execute on the remote machine and only set up the input and output files. This would make it easier to do things like slice a 3D print on a remote PC from a Raspberry Pi running Octoprint, for example. Of course, this is all open source, so maybe we should go make that fix ourselves.

Then again, you could do something like this pretty easily with sshfs and some other tricks. If you want to run a program on a bunch of remote machines, there are ways to do that, too.
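One of those tricks doesn’t even need sshfs: stream the input file over ssh’s stdin and capture stdout back into a local file, so the remote box never stores anything. A sketch in Python (the hostname and ffmpeg pipe flags in the comment are illustrative; it assumes ssh key auth is already set up):

```python
import subprocess

def build_argv(host, command):
    """Assemble the ssh invocation that runs `command` on `host`."""
    return ["ssh", host, command]

def remote_filter(host, command, in_path, out_path):
    """Feed a local file to the remote command's stdin and write its
    stdout back to a local file -- no files ever live on the remote box."""
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        subprocess.run(build_argv(host, command), stdin=src, stdout=dst,
                       check=True)

# e.g. re-encode video remotely while keeping both files local
# (matroska output, since mp4 muxing needs a seekable file, not a pipe):
#   remote_filter("user@bighost",
#                 "ffmpeg -i pipe:0 -vcodec libx265 -crf 28 -f matroska pipe:1",
#                 "input.mp4", "output.mkv")
```

Unlike Outrun, this only works for programs that behave as filters, but it needs no root, no FUSE, and no matching CPU architecture.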
