Question

We have studied seven technological revolutions in communications during the early weeks of the semester. Each of these in its time was a disruptive technology.

            a) Describe each of the seven technological revolutions in communications, their impacts on society, and why each was a disruptive technology.

            b) Over time various people, groups, organizations, and entrepreneurs sought to use and sometimes to control each of the new technologies. Describe the stages of development that several of these key technologies have followed. Ultimately, governments and courts seek to regulate most of these technologies by creating a set of rights and obligations. Why? What types of regulations have emerged?    

            c) Explain why it is difficult for the legal system to react to the newest technologies, which now appear almost daily. What issues need to be addressed by new advances in artificial intelligence (AI) and its many applications? Consider autonomous vehicles and other applications of AI in your answer. What characteristics of AI make it particularly difficult for the legal system? What governmental entities should be making these decisions?

Homework Answers

Answer #1

a)

1. Computing capabilities, storage and access

Between 1985 and 1989, the Cray-2 was the world’s fastest computer. It was roughly the size of a washing machine. Today, a smart watch has twice its capabilities.

As mobile devices become increasingly sophisticated, experts say it won’t be long before we are all carrying “supercomputers” in our pockets. Meanwhile, the cost of data storage continues to fall, making it possible to keep expanding our digital footprints.

Today, 43% of the world’s population is connected to the internet, mostly in developed countries. The United Nations has set the goal of connecting all the world’s inhabitants to affordable internet by 2020. This will increase access to information, education and global marketplaces, empowering many people to improve their living conditions and escape poverty.

Imagine a world where everyone is connected by mobile devices with unprecedented processing power and storage capacity. If we can achieve the goal of universal internet access and overcome other barriers such as digital illiteracy, everybody could have access to knowledge and all the possibilities this brings.

Privacy and security concerns arise in the development and use of computational systems and artifacts. Aggregation of information, including geolocation, cookies, and browsing history, raises privacy and security concerns. Anonymity in online interactions can be enabled through the use of online anonymity software and proxy servers. Technology enables the collection, use, and exploitation of information about, by, and for individuals, groups, and institutions. People have instant access to vast amounts of information online, and accessing this information in turn enables the collection of both individual and aggregate data. Commercial and governmental curation of information may be exploited if privacy and other protections are ignored. Targeted advertising can help individuals, but it can be misused at both the individual and aggregate levels.

2. Big data

Each time you run a Google search, scan your passport, make an online purchase or tweet, you are leaving a data trail behind that can be analysed and monetized. Thanks to supercomputers and algorithms, we can make sense of massive amounts of data in real time. Computers are already making decisions based on this information, and in less than 10 years computer processors are expected to reach the processing power of the human brain.

This means there’s a good chance your job could be done by computers in the coming decades.

Disruptive innovations are:

  • More accessible (with respect to distribution or usability) than existing solutions
  • Cheaper (from a customer perspective)
  • Built on a business model with structural cost advantages over existing solutions in the market

These characteristics of disruption matter because when all three exist, it is very difficult for an existing business to remain competitive. Whether an organization is saddled with an outmoded distribution system, highly trained specialist employees or a fixed infrastructure, adapting quickly to new environments is challenging when one or all of those things become outdated. Writing off billions of dollars of investment, upsetting the distribution partners of your core business, firing hundreds of employees: these things are difficult for managers to contemplate, and with good reason.

Every day, new technologies emerge, but established vendors are shaken only when a technological innovation is extremely powerful. Big Data technologies such as NoSQL and Hadoop can be seen as catalysts for this type of innovation. We should understand here that big data is just raw data; the disruptive innovation coming from big data is big data analytics, the processes and technologies that extract value from that raw data.
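The core idea behind Hadoop-style analytics is the MapReduce pattern: a map step that emits key-value pairs from raw records, and a reduce step that aggregates them per key. A minimal single-machine sketch (a toy illustration of the pattern, not actual Hadoop code):

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every raw document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct key (word)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data needs big tools", "raw data is just data"]
print(reduce_phase(map_phase(docs)))  # word frequencies across both documents
```

In a real cluster the map and reduce steps run in parallel across many machines, with the framework handling data distribution and fault tolerance; the programmer writes only the two small functions above.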

3. Digital health

Analysing medical data collated from different populations and demographics enables researchers to understand patterns and connections in diseases and identify which conditions improve the effectiveness of certain treatments and which don’t.

Big data will help to reduce costs and inefficiencies in healthcare systems, improve access and quality of care, and make medicine more personalized and precise.

In the future, we will all have very detailed digital medical profiles ... including information that we’d rather keep private.

Digitization is empowering people to look after their own health. Think of apps that track how much you eat, sleep and exercise, and being able to ask a doctor a question by simply tapping it into your smartphone.

In addition, advances in technologies such as CRISPR/Cas9, which unlike other gene-editing tools, is cheap, quick and easy to use, could also have a transformative effect on health, with the potential to treat genetic defects and eradicate diseases.

On the road to digital transformation, the healthcare industry is changing how physicians and healthcare systems are diagnosing diseases, treating patients, and monitoring their conditions on an ongoing basis. Technologies such as smartphone based apps, chatbots, artificial intelligence, and medical devices connected by IoT will continue to disrupt patient care. Innovation needs to continue to focus on making changes to improve healthcare and solve longstanding problems.

4. The digitization of matter

3D printers will create not only cars, houses and other objects, but also human tissue, bones and custom prosthetics. Patients would not have to die waiting for organ donations if hospitals could bioprint them.

In fact, we may have already reached this stage: in 2014, doctors in China gave a boy a 3D-printed spine implant, according to Popular Science.

The 3D printing market for healthcare is predicted to reach some $4.04 billion by 2018. According to a survey by the Global Agenda Council on the Future of Software and Society, most respondents expect the first transplant of a 3D-printed liver to happen by 2025.

The survey also reveals that most respondents expect the first 3D-printed car to be in production by 2022.

Three-dimensional printing, which brings together computational design, manufacturing, materials engineering and synthetic biology, reduces the gap between makers and users and removes the limitations of mass production.

Consumers can already design personalized products online, and will soon be able to simply press “print” instead of waiting for a delivery.

We’re all familiar with digitized text, digitized audio, and digital video. One of the profoundly interesting and important things going on these days is that lots of other information is being digitized. Our social interactions are being digitized, largely thanks to all the different social networks and social media that we have. The attributes of the physical world are being digitized, thanks to all of the sensors we have for pressure, temperature, force, stress, strain, you name it. Our whereabouts are being digitized, thanks to GPS systems and smartphones.

The challenge comes from the fact that if this encroachment really is happening more quickly, more broadly, and more deeply than before, technology is going to race ahead, but it could leave behind many people who want to offer their labor to the economy. How we deal with that challenge, and what we do about the fact that technology is racing ahead while leaving some people, potentially a lot of them, behind, is one of the great challenges for our generation.

5. The internet of things

Within the next decade, it is expected that more than a trillion sensors will be connected to the internet.

If almost everything is connected, it will transform how we do business and help us manage resources more efficiently and sustainably. Connected sensors will be able to share information from their environment and organize themselves to make our lives easier and safer. For example, self-driving vehicles could “communicate” with one another, preventing accidents.

By 2020 around 22% of the world’s cars will be connected to the internet (290 million vehicles), and by 2024, more than half of home internet traffic will be used by appliances and devices.

Internet of Things (IoT) has been identified as a disruptive technology because of its potential to penetrate every aspect of our lives and generate new business opportunities. However, existing IoT architectures focus mainly on reliable end-to-end communications, with a lot of emphasis on end-to-end interoperability of heterogeneous data formats, hardware and software components from different platforms, and also the security and privacy of data and networks.

6. Blockchain

Only a tiny fraction of the world’s GDP (around 0.025%) is currently held on blockchain, the shared database technology where transactions in digital currencies such as Bitcoin are made.

But this could be about to change, as banks, insurers and companies race to work out how they can use the technology to cut costs.

A blockchain is essentially a network of computers that must all approve a transaction before it can be verified and recorded.

Using cryptography to keep transactions secure, the technology provides a decentralized digital ledger that anyone on the network can see.

Before blockchain, we relied on a trusted institution, such as a bank, to act as a middleman. Now the blockchain can act as that trusted authority on every type of transaction involving value, including money, goods and property.

The uses of blockchain technology are endless. Some expect that in less than 10 years it will be used to collect taxes. It will make it easier for immigrants to send money back to countries where access to financial institutions is limited.

And financial fraud will be significantly reduced, as every transaction will be recorded and distributed on a public ledger, which will be accessible by anyone who has an internet connection.

Blockchain has been called a disruptive technology comparable to the Internet, promising innovation in the financial and commercial arena on the scale of the impact that the Web had on communication. It stands to revolutionise the way we interact with each other based on three main concepts:

  • Tracking and data storage – the decentralised system, distributed across an extensive network of computers, becomes a safe way to track data changes over time.
  • Trust – the key concept. The system allows us to interact directly with our data in real time; all the computers on the network verify changes to the transactions, which creates trust in the data.
  • Peer-to-peer transactions – in this system there are no more intermediaries. Instead of sharing our data with an intermediary such as a bank or a lawyer, we share it directly with peers. It is a new way to access, verify and transact with each other.
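The tracking concept above rests on a simple mechanism: each block stores the hash of the previous block, so altering any historical record breaks every later link. A minimal sketch of that hash chain (a toy illustration only; it omits consensus, proof-of-work, and peer-to-peer networking entirely):

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Build a block that commits to its own contents and its predecessor."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Every block must reference the hash of the block before it."""
    return all(cur["prev_hash"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

chain = [make_block(0, "genesis", "0")]
chain.append(make_block(1, "Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block(2, "Bob pays Carol 2", chain[-1]["hash"]))
print(chain_is_valid(chain))   # the untampered chain verifies

chain[1]["hash"] = "tampered"  # editing any past block breaks the links
print(chain_is_valid(chain))   # verification now fails
```

On a real blockchain, every node holds a copy of this chain and recomputes the hashes independently, which is what makes retroactive tampering detectable by the whole network.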

7. Wearable internet

Technology is getting increasingly personal. Computers are moving from our desks, to our laps, to our pockets and soon they will be integrated into our clothing.

By 2025, 10% of people are expected to be wearing clothes connected to the internet and the first implantable mobile phone is expected to be sold.

Implantable and wearable devices such as sports shirts that provide real-time workout data by measuring sweat output, heart rate and breathing intensity are changing our understanding of what it means to be online and blurring the lines between the physical and digital worlds.

The potential benefits are great, but so are the challenges.

These devices can provide immediate information about our health and about what we see, or help locate missing children. Being able to control devices with our brains would enable disabled people to engage fully with the world. There would be exciting possibilities for learning and new experiences.

Fitness and lifestyle trackers are all the rage, and there are many to choose from, including the Fitbit, the Nike Fuelband, the Adidas Fit Smart, the Samsung Gear Fit, the Misfit Shine and the Jawbone Up, among many others. They do things like track your workout time, steps (like a pedometer), distance and calories burned, as well as measure your heart rate and monitor your sleep patterns. Some work in conjunction with apps on your smartphone or an online portal where you can track your data, set your goals and possibly do things like log dietary information.

The devices and apps can use the gathered data to cue you to increase or decrease your workout intensity, let you share data with other users for accountability and motivation and, in the case of at least one company (GOQii), get you in touch with an experienced fitness coach who monitors your data, sends advice and responds to questions (for a recurring fee).

Some of these devices are worn on your wrist or ankles, some wrap around your chest and others clip onto your clothing. They may have small screens, LED status lights or no display at all. Some require plugging in to upload your data and some sync wirelessly and automatically. Some work with only one operating system while others work with several.

Many modern smartphones even have sensors now that allow phone apps to perform some of these functions, like tracking your routes or your steps. Some even have heart rate checking capabilities.

These or similar innovations could disrupt personal training and other fitness-related jobs, although there are some things a wearable device or app is not going to be able to do, like make sure you're using good form -- at least for now.

b)

In order to implement "privacy" in a computer system, we need a more precise definition. We have to decide when and under what conditions to give out personal information. Specifically, we must decide when to allow anonymous transactions and when to require accountability. If there are subgroups in society, or countries, with differing ideas about the answers to these questions, technology can, to a large extent, accommodate each group. There does not necessarily have to be only one privacy regime. Less law and more user choice is possible now; technology can provide every user with controls fine-tuned for the balance of privacy and accessibility that they prefer.

b.1. ANONYMITY VS. ACCOUNTABILITY

Individuals sometimes choose to remain anonymous to safeguard their privacy, for example, when browsing in a department store or purchasing an "adult" magazine. Browsing the Web has also, to date, usually been an anonymous activity. Moving beyond the Web to the Internet in general, one can send anonymous messages using an anonymous remailer program. It is fairly easy today for a technically sophisticated person to remain anonymous and avoid accountability on the Internet for actions which are questionable or illegal, e.g., sending advertising mail to numerous newsgroups (spamming), running a pornography server, or hacking the Web page of another person.

But technology can promote accountability as well as anonymity. If computer systems or applications require "proof" of identity before allowing use, we will have a much more accountable society. It would be as if cars would only start when driven by "authorized" drivers; mere keys would not work. On the other hand, usability and privacy would suffer--imagine having to authenticate yourself to a pay phone or to a rental car!

Accountability should not always be required. Anonymous leafleting and other modes of expression are rightly given strong protection by the U.S. Constitution. An appropriate balance must be struck by the community; then the technology can enforce that balance.

b.2. PRIVACY THREATS FROM TODAY'S COMPUTER SYSTEMS

The Privacy Act of 1974 [Privacy 1974] and data protection legislation in other countries have to some extent defused criticism and concern about potential government invasion of privacy. Indeed, medical, credit, and marketing databases appear to be as troublesome as governmental databases. Some private endeavors have already raised significant privacy concerns in the Internet community.

The Lotus MarketPlace: Households database was going to make names, addresses, demographic and prior purchase behavior data for 120 million U.S. consumers available on a CD-ROM in 1991. Consumers objected to the secondary use of identifiable personal information without their consent. Individual credit reports provided the basis of the MarketPlace data and, as a result, a fundamental privacy principle, that personal information collected for one purpose should not be used for other purposes without the consent of the individual, was violated.

b.3. TECHNOLOGICAL SAFEGUARDS

There are a number of technological mechanisms which enhance computer security and thus increase individual privacy in systems. This paper only highlights a few which are relevant to our topic. There is a wealth of computer security literature for the reader desiring additional information [Pfleeger 1996, Russell 1991].

Authentication

There are typically three types of authentication mechanisms: something you know, something you have, or something you are. Aside from personal recognition of a user, the most common mechanism is the password. For a variety of technical reasons, passwords alone will not be secure enough in the long run. Slowly we are going to evolve from these systems which only demand "something you know" (e.g., passwords) to those which also require "something you have" or "something you are." Thus, we will see more and more computers built with the capability to read an electronic card in the possession of the user, just like an automated teller machine at a bank. Sometimes this card will automatically transmit the password, and sometimes, for greater security, the user will have to enter his or her password separately, in addition to possessing the physical card.
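The "something you know" mechanism is usually implemented by never storing the password itself, only a salted, deliberately slow hash of it. A minimal sketch using Python's standard library (the iteration count and salt size here are illustrative choices, not prescriptions):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted digest; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)          # random salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("letmein", salt, stored))                       # False
```

The many PBKDF2 iterations make brute-force guessing expensive, and the constant-time comparison avoids leaking information through timing, two details that matter even in this simplest "something you know" scheme.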

We may also see the further development of biometrics (e.g., fingerprints, voice patterns, retinal patterns) as authentication mechanisms. These already exist, but their application has been limited due to user acceptance problems. California and some other states now require fingerprints on their driver's licenses. Since most users won't want to carry several cards but would prefer one all-purpose card (in battles between utility and security, utility almost always prevails), additional privacy questions appear: why not have one central authority (a government?) issue a universal (money) card (and driver's license)? The advantages are fewer cards and account numbers, and more efficiency for the system and for its users. The disadvantages are the potential for nonresponsive bureaucracies to develop and for abuse of power by a rogue government. Society has to decide whether the (financial and social) costs of maintaining multiple separate identity regimes are worth the (privacy) benefits.

Cryptography

Manual encryption methods, using codebooks, letter and number substitutions, and transpositions can be found in writings of the Spartans, Julius Caesar, Thomas Jefferson, and Abraham Lincoln. Cryptography has often been used in wartime, and critical victories (such as that of the United States at the Battle of Midway in World War II) depended on successful analysis (codebreaking) of the enemy's encryption methods.

There are two kinds of cryptographic systems--secret key and public key. In secret key systems, a secret key--a specially chosen number--when combined with a set of mathematical operations, both "scrambles" and "unscrambles" hidden data. The key is shared among consenting users. In public key systems, each user has two numeric keys--one public and one private. The public key allows anybody to read information hidden using the sender's private key, thus allowing authentication of messages (electronic signatures) in addition to confidentiality. The private key is kept secret by the user.

Many cryptographic systems today use a combination of public key and secret key encryption: secret key encryption is used to encrypt the actual message, and public key encryption is used for sender authentication, key distribution (sending secret keys to the recipient), and digital signatures. This hybrid combination of the two encryption technologies uses the best of each while simultaneously avoiding the worst. It is the basic method of sending secure messages and files from anywhere to anywhere over unsecured networks. As long as sender and recipient ensure that their private keys are exclusively in their possession, this process will work every time, yet thwart any would-be attacker. It can be used to send (and keep secret) a two-line message or a two-hour movie, and anything in between.
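The hybrid pattern described above can be sketched end to end: the public key wraps a random session key, and the fast secret-key cipher encrypts the actual message. This toy uses textbook RSA with tiny primes and a one-byte XOR stream, both hopelessly insecure and chosen purely to keep the arithmetic visible; real systems use vetted libraries, large keys, and authenticated ciphers.

```python
import os

# Toy "public key": textbook RSA with n = 61 * 53 = 3233 (illustration only).
n, e, d = 3233, 17, 2753

def rsa_encrypt(m):
    return pow(m, e, n)     # anyone holding the public key (n, e) can do this

def rsa_decrypt(c):
    return pow(c, d, n)     # only the private-key holder knows d

def xor_stream(data, key_byte):
    """Toy secret-key cipher: XOR every byte with the session key byte."""
    return bytes(b ^ key_byte for b in data)

# Sender: wrap a random session key with the public key, then encrypt
# the (arbitrarily long) message with the fast secret-key cipher.
session_key = os.urandom(1)[0]
wrapped_key = rsa_encrypt(session_key)
ciphertext = xor_stream(b"meet at the usual place", session_key)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = rsa_decrypt(wrapped_key)
plaintext = xor_stream(ciphertext, recovered_key)
print(plaintext)  # the original message is recovered
```

The division of labor is the point: the expensive public-key operation is performed once on a small session key, while the bulk data, whether a two-line message or a two-hour movie, moves through the cheap symmetric cipher.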

Today, cryptography also is often used to prevent an intruder from substituting a modified message for the original one (to preserve message integrity) and to prevent a sender from falsely denying that he or she sent a message (to support nonrepudiation). If data is deemed to be "owned" by individuals, and royalties paid, then we can use encryption technology to digitally sign individual pieces of data, effectively placing taggants with the data. Thus one could always trace the data back to its source.
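Message integrity, as distinct from confidentiality, can be sketched with a keyed hash: any change to the message in transit changes the tag, so tampering is detectable. A minimal stdlib example (note that because both parties share the key, an HMAC alone gives integrity but not nonrepudiation; the latter needs the digital signatures discussed above):

```python
import hashlib
import hmac

secret = b"shared-secret-key"   # illustrative key known to sender and recipient

def tag(message):
    """Attach a keyed hash so any modification in transit is detectable."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

original = b"transfer $100 to account 42"
mac = tag(original)

# Recipient recomputes the tag and compares in constant time.
print(hmac.compare_digest(mac, tag(original)))                        # intact
print(hmac.compare_digest(mac, tag(b"transfer $900 to account 42")))  # tampered
```

An attacker who can modify the message but does not know the key cannot produce a matching tag, which is exactly the integrity guarantee the paragraph above describes.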

Cryptographic procedures, or algorithms, are (or can be) public in most cases; the security of the system depends on users keeping keys, which are used with the (public) algorithms, secret.

Firewalls/Authorization

Increasing numbers of users and computers are being checked for authorization before being allowed to interact with internal corporate, university, or government systems and obtain information from them. Traditional operating system and data base management controls have been joined recently by firewalls which check that only properly authorized (and sometimes paid-up) users are allowed access.

Cookie cutters

Often, Web servers keep records of what a user has done ("cookies") in order to better serve the user when he or she visits the site again. This capability can be abused, and thus most browsers now allow users to refuse to give Web servers this information. Occasionally, this will result in a denial of service to the user. But it is the user's choice, not the system's.

c)

Given the scale and complexity of the global economy as well as our knowledge about human nature, it would be extremely naïve to rely simply on spontaneous and voluntary ethical behaviour by individuals and corporations to ensure fairness or improve human dignity. Regulation, combined with serious enforcement, is required to guide our behaviour and ensure the rule of law.

However, this approach has often resulted in a cat-and-mouse game between regulators and economic actors. Law-abiding individuals and corporations spend inordinate amounts of time and money in search of legal loopholes in order to achieve technical compliance only, while others abuse the legal framework so that their criminal activities can remain undetected.

Facebook has been continuously embroiled in data privacy issues. The most current example is the case involving Cambridge Analytica and the 2016 presidential election in the United States. In 2013, Cambridge psychology professor Aleksandr Kogan obtained permission from Facebook to mine data through a seemingly harmless app that matched a personality quiz with the user’s Facebook likes and dislikes.

Facebook argues that there was no data breach involved. Kogan obtained permission and those Facebook users who took the quiz gave their consent. As one of the Facebook users who completed the quiz at the time, I can attest to the fact that I had to give my consent and that I was fascinated by the results. However, the subsequent sale of the data by Kogan to Cambridge Analytica violated Facebook’s policies. The ethical and legal issues involved in the case are complex, and the overall remedial action taken by the company has addressed the broader privacy agenda. Facebook has taken steps to ensure more transparency in terms of advertisements, and has indicated that they will do this regardless of possible new legal requirements to do so, such as the Honest Ads Act in the United States.

Facebook CEO Mark Zuckerberg has publicly supported more regulation, but has also expressed a preference for flexible guidelines, rather than the model provided by, for example, Germany’s Network Enforcement Act. According to the act, also referred to as the “Facebook Act”, social media companies have to remove offensive posts within 24 hours or face fines of up to €50 million.

Now, some positive expectations for the evolution of humans and AI:

  • “AI will help people to manage the increasingly complex world we are forced to navigate. It will empower individuals to not be overwhelmed.”
  • “AI will reduce human error in many contexts: driving, workplace, medicine and more.”
  • “In teaching it will enhance knowledge about student progress and how to meet individual needs; it will offer guidance options based on the unique preferences of students that can guide learning and career goals.”
  • “2030 is only 12 years from now, so I expect that systems like Alexa and Siri will be more helpful but still of only medium utility.”
  • “AI will be a useful tool; I am quite a ways away from fearing SkyNet and the rise of the machines.”
  • “AI will produce major benefits in the next 10 years, but ultimately the question is one of politics: Will the world somehow manage to listen to the economists, even when their findings are uncomfortable?”
  • “I strongly believe that an increasing use of numerical control will improve the lives of people in general.”
  • “AI will help us navigate choices, find safer routes and avenues for work and play, and help make our choices and work more consistent.”
  • “Many factors will be at work to increase or decrease human welfare, and it will be difficult to separate them.”

The increasing role of AI in the economy and society presents both practical and conceptual challenges for the legal system. Many of the practical challenges stem from the manner in which AI is researched and developed and from the basic problem of controlling the actions of autonomous machines. The conceptual challenges arise from the difficulties in assigning moral and legal responsibility for harm caused by autonomous machines, and from the puzzle of defining what, exactly, artificial intelligence means. Some of these problems are unique to AI; others are shared with many other postindustrial technologies. Taken together, they suggest that the legal system will struggle to manage the rise of AI and ensure that aggrieved parties receive compensation when an AI system causes harm.

The most obvious feature of AI that separates it from earlier technologies is AI’s ability to act autonomously. Already, AI systems can perform complex tasks, such as driving a car and building an investment portfolio, without active human control or even supervision. The complexity and scope of tasks that will be left in the hands of AI will undoubtedly continue to increase in the coming years. Extensive commentary already exists on the economic challenges and disruptions to the labor market that these trends are bringing about, and on how they are likely to accelerate going forward. Just as the Industrial Revolution caused socioeconomic upheaval as mechanization reduced the need for human manual labor in manufacturing and agriculture, AI and related technological advances will reduce the demand for human labor in the service sector as AI systems perform tasks that once were the exclusive province of well-educated humans. AI will force comparably disruptive changes to the law as the legal system struggles to cope with the increasing ubiquity of autonomous machines.

From a legal perspective, the takeaway from the chess and C-Path anecdotes is not the (mis)impression that the AI systems displayed creativity, but rather that the systems’ actions were unexpected, certainly to outside observers, and perhaps even to the systems’ programmers. Because AI systems are not inherently limited by the preconceived notions, rules of thumb, and conventional wisdom upon which most human decision-makers rely, AI systems have the capacity to come up with solutions that humans may not have considered, or that they considered and rejected in favor of more intuitively appealing options. It is precisely this ability to generate unique solutions that makes the use of AI attractive in an ever-increasing variety of fields, and AI designers thus have an economic incentive to create AI systems capable of generating such unexpected solutions. These AI systems may act unforeseeably in some sense, but the capability to produce unforeseen actions may actually have been intended by the systems’ designers and operators.
