Category: IP and Technology Blog

  • by 
  • Big data is the large-scale collection of a wide range of data points. These data points are collected constantly, from nearly every facet of modern life. Every time a person searches the internet, shops online, posts or browses on social media, or uses the GPS on a phone or in a vehicle, data is being collected, and those are only a few of the hundreds of collection points. With the rise of the Internet of Things, even more data will be collected in the coming months and years. Data can even be collected in ways that appear unconventional; for example, sensors placed on a bridge can track when that bridge is likely to fail and require maintenance.

    The collection of data has been on the rise in recent years, due in large part to the steadily decreasing cost of technology and data storage. In the past, big data, and the power of predictive analytics that comes with it, was available only to large corporations and portions of the government. With the increased ability to collect and store more information, the use of big data and predictive analytics has spread to smaller companies and organizations.

    What type of information is being collected?

    Information is being collected nearly all the time; anytime an electronic device is used, data is likely being collected. Information may even be collected without direct use of an electronic device. Some make the distinction between data that is “born digital” and data that is “born analog”. Information that is born digital was created, either by humans or by computers, specifically for digital use by a computer or other processing system. Examples of information that is “born digital” include emails; text messages; GPS locations; metadata associated with phone calls, including the numbers dialed and length of calls; data associated with typical commercial transactions, such as credit card swipes and bar-code scans; data from cars, televisions, and appliances connected to the Internet of Things; and much more.

    Information that is born analog arises from characteristics of the physical world and cannot be accessed electronically until a sensor is applied. A sensor is any device that observes physical phenomena and converts them into a digital form. Examples of information that is “born analog” include the voice content of a phone call; personal health data such as heartbeat, respiration, and number of steps taken; video from surveillance cameras, drones, and cell phone cameras; audio captured by microphones; medical imaging; and more.

    What do companies and the government do with the data?

    The most common use of data points and the underlying information is to create profiles on individuals, which can then be used in predictive analytics. Predictive analytics is the use of data points to predict future outcomes by applying algorithms to those data points. These algorithms can then be used in virtually all areas. In marketing, predictive analytics can be used to identify the customers most likely to purchase specific products. In an employment setting, predictive analytics can be used to find candidates likely to excel in a given position. Predictive analytics is being used in education to help college students choose a major, as well as to identify the courses they are likely to do well in and the courses they are likely to fail. Predictive analytics is also used in crime and terrorism prevention. In these areas, the collected data can be used to try to identify high-crime areas and to create profiles of individuals who are likely to commit crimes or engage in criminal activity.
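
    As a rough sketch of the mechanics described above, the example below fits a simple classification model to hypothetical customer data and scores a new customer’s likelihood to purchase. The feature names, data, and library choice (scikit-learn) are illustrative assumptions, not a description of any particular company’s system.

    ```python
    # Minimal predictive analytics sketch: score customers on their likelihood
    # to purchase, using invented illustrative data.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data points collected per customer:
    # [site visits last month, items left in cart, opened marketing email (0/1)]
    X = [
        [12, 3, 1],
        [2, 0, 0],
        [8, 1, 1],
        [1, 0, 0],
        [15, 5, 1],
        [3, 1, 0],
    ]
    y = [1, 0, 1, 0, 1, 0]  # 1 = purchased, 0 = did not purchase

    model = LogisticRegression()
    model.fit(X, y)

    # Estimate the purchase probability for a new customer profile.
    new_customer = [[10, 2, 1]]
    print(model.predict_proba(new_customer)[0][1])
    ```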

  • by 
  • U.S. District Judge Alison Nathan recently declared that bitcoin is money and qualifies as “funds” under federal law, adding to the confusion surrounding bitcoin’s classification. The decision is at odds with other recent cases, including a case in Florida, which held that bitcoin is not money. The recent decision brings bitcoin within the meaning of money and funds under federal anti-money laundering statutes, similar to the conclusion reached by another court in the Second Circuit in a 2014 case.

    Judge Nathan’s conclusion that bitcoin is money followed her rejection of a motion to dismiss charges related to operating an unlicensed currency exchange, in which the defendant argued that bitcoin did not qualify as money under the statute. The ability to use bitcoin as a currency and a store of value was central to Judge Nathan’s decision. She stated, “Bitcoins can be accepted as a payment for goods and services or bought directly from an exchange…They therefore function as pecuniary resources and are used as a medium of exchange and a means of payment.”

  • by 
  • Software as a Service (“SaaS”) agreements are entered into between a provider and a customer and allow the customer to use the service offered by the provider. Some of the more important aspects that a provider should be sure to address in a SaaS agreement are detailed below.

    1. Data Backup

    Generally, a provider will want to include a section that explains how its service backs up data. A common section will explain that while the provider will perform routine data backups, the customer is still responsible for performing regular backups of its own to ensure that no customer data is lost or damaged. In short, this section helps protect a provider from liability for customer data that is not properly backed up and is therefore damaged, lost, or otherwise altered.

    2. Intellectual Property Rights

    An intellectual property rights section is present in nearly all SaaS agreements. This section clarifies who owns the information that is generated within the service that is being offered by the provider. Typically, the provider will own all of the intellectual property rights in the actual service, and the customer will own any of the specific data that they place into the service.

    Without a clearly defined intellectual property rights section, a provider could be exposed to future disputes over the use of information within the service. It is also common for this section to include a consent by the customer that allows the provider to use the customer’s data in ways that are useful or necessary to perform the service being provided.

    3. Authorization Limitations and Restrictions

    The SaaS agreement allows the customer to access and use the provider’s service. However, there are common limits and restrictions on the ways in which the customer can use the service, as well as the ways in which the customer can access the data underlying the service. Typically, an authorization limitations and restrictions section will explicitly list ways in which the customer shall not access or use the service, and its underlying data.

    Some commonly prohibited behaviors include: copying or modifying any aspect of the service; renting, leasing, selling, sublicensing, or assigning any aspect of the service; reverse engineering, decoding, or attempting to gain access to the source code of the service; breaching any security device or protections used by the service or the provider; inputting, uploading or providing to or through the service any information or materials that are unlawful or contain or activate any harmful code.

    4. Service Levels

    Providers typically include a section in the SaaS agreement that protects them if their service is briefly inaccessible or unavailable to the customer. A service levels section protects the provider from unforeseen circumstances that may impact the availability of the service. This section will typically list exceptions to the availability of the service; commonly listed exceptions include failure of the customer to comply with the agreement, failures on the customer’s side, issues with the customer’s internet access, force majeure events (essentially forces of nature, such as a large storm or other similar event), failure of other services or hardware not provided by the provider, and scheduled downtime. This section of the SaaS agreement allows for brief periods of inaccessibility of the service, protecting the provider from unforeseen circumstances and from customer errors or issues that prevent the customer from having continuous access to the service.

    5. Termination and Suspension or Termination of Services

    The typical SaaS agreement will have a section on termination of the contract, as well as a section that allows for the provider to suspend or terminate the services. The termination section allows the provider to lay out specific instances where the contract, and thus access to the service, can be stopped.

    This section typically makes it clear what a customer should do with the data, intellectual property, or other information that the customer has learned, or had access to, through use of the provider’s service. The termination section also allows the provider to disable all customer access to the services. Typically, both the provider and the customer have the ability to terminate the agreement upon 30 days’ notice.

    Generally, the suspension and termination section gives the provider the ability to suspend or terminate the customer’s access to, or ability to use, any part of, or all aspects of, the service. The section enumerates instances that would allow for suspension or termination of the service. Common instances include: a judicial or governmental order, or a law enforcement request, that requires the provider to do so; a good faith and reasonable belief on the part of the provider that the customer has failed to comply with material terms of the contract, has accessed or used the service beyond the scope of the agreement, or has been or is involved in fraudulent, misleading, or unlawful activities in connection with the services; or the expiration or termination of the agreement.

  • by 
  • Introduction

    Much of the world economy has become reliant on the quick and safe transfer of data. Transferred data includes everything from an online employee directory to individual social media profiles to billing and payment information. Every day, companies in the United States send and receive the personal data of millions of European Union citizens. Various international agreements and legal instruments have been used in the past to protect personal data and to ensure it is securely handled. The Safe Harbor Agreement (“Safe Harbor”) between the US and the EU was one such instrument, and the EU-US Privacy Shield (“Privacy Shield”) has been proposed to replace Safe Harbor.

    The Outgoing Safe Harbor

    In 1995, the European Union enacted the Directive on Data Protection (the “Directive”). This Directive established standards that non-EU countries had to meet before personal data on EU citizens could be transferred to them. In order to comply with these standards, the US and the European Commission (“EC”) developed the Safe Harbor framework. This framework was designed to help US organizations join the Safe Harbor program and make them eligible to transfer personal data of EU citizens.

    Under Safe Harbor, only US organizations regulated by the FTC, or certain organizations under the jurisdiction of the DOT, were eligible to participate in the Safe Harbor program. According to Export.gov, organizations such as financial institutions, telecommunication carriers, labor associations, and non-profits, among others, were not subject to FTC or DOT jurisdiction and therefore were ineligible to participate in Safe Harbor. Since 2009, over 4,400 US organizations have transferred personal data from the EU to the US.

    In October 2015, Safe Harbor was essentially invalidated by the Court of Justice of the European Union (“CJEU”), which questioned whether the EC had thoroughly appraised the United States’ level of protection of the private data of EU citizens. These concerns gained traction in the wake of Edward Snowden’s revelations about US government agencies. Essentially, the CJEU was concerned that US government agencies may be requesting data that, if provided, would place US organizations in violation of major components of Safe Harbor.

    The Incoming Privacy Shield

    Following the decision from the CJEU, the US Department of Commerce and the EC negotiated a successor to Safe Harbor. The Privacy Shield, Safe Harbor’s replacement, includes stronger obligations on companies processing data of EU citizens, new safeguards and transparency obligations for US government agencies, and a redesigned system for processing complaints. The new agreement was designed to be more durable and to reflect both US and EU privacy and security values. Privacy Shield, at this point, is designed to cover the same businesses covered by Safe Harbor, namely businesses and organizations regulated by the FTC and the DOT.

    Implications on US Organizations

    While there may be wide-ranging implications for US organizations, especially cloud computing companies, on the way they conduct business, interact with EU citizens, and structure privacy agreements, there is significant speculation as to whether the proposed Privacy Shield agreement will ultimately pass the same assessment that struck down the Safe Harbor agreement.

    First, a significant portion of the agreement must still be drafted before the agreement is even presented to the European Parliament and EU countries for approval. The drafting process is not expected to be finished until April 2016 at the earliest. In the interim, companies should continue to comply with the outgoing Safe Harbor regulations until the Privacy Shield agreement goes into effect, if it does.

    According to the Information Technology and Innovation Foundation (“ITIF”), companies should fully and carefully evaluate the new Privacy Shield regulations before self-certifying. ITIF stated that “onward transfers” are one of many items that the Privacy Shield will protect and enforce more strictly. An onward transfer involves exporting data from the EU to the US, where it is then passed on to a third party. This third party, which may not even be certified under the Privacy Shield program (or even Safe Harbor), will likely be required to provide assurances that the data will be protected once delivered. As with Safe Harbor, though, it is anticipated that US businesses will still be able to self-certify under Privacy Shield.

    Additionally, there is reason to believe that US organizations and businesses will be permitted a reasonable amount of time to transition and gain required certification. It is unlikely that normal business operations will be impacted or halted simply to gain approval. However, it is obviously recommended that once Privacy Shield is enacted, businesses begin any required transitions and approval processes as soon as possible.

    Ultimately, experts have a few recommendations for American organizations and businesses. In the interim, organizations should continue to abide by any agreements made under Safe Harbor and continue to follow Safe Harbor protocols. Experts also recommend that businesses stay in close contact with their lawyers to ensure they update agreements, protocols, and procedures as developments in, and details about, Privacy Shield become public. Finally, businesses should remember that Privacy Shield has not been, and ultimately may not be, signed into effect, and they should therefore plan for any delays, interruptions, or other business events that could arise if the Privacy Shield ultimately fails to clear a necessary hurdle.

    Skepticism Regarding the Future of Privacy Shield

    There is speculation as to whether the Privacy Shield will pass the required approvals and be enacted. The Privacy Shield faces a very long and complex approval process, which includes securing approval from EU governmental and quasi-governmental agencies, the EU itself, and the EU member countries.

    The most significant concern, however, may be whom the new regulations impact. The CJEU’s conclusion regarding Safe Harbor seemed to point to shortcomings relating to the US government and data requests made by the US government to private sector organizations. However, many of the proposed requirements under the Privacy Shield impact only the private sector and do not address the CJEU’s concerns about how the US government potentially endangered the security of EU citizens’ data under Safe Harbor. This lack of change could ultimately lead the CJEU to arrive at the same conclusion for Privacy Shield as it did for Safe Harbor.

  • by 
  • Introduction

  • Two doctors have implemented blockchain technology in clinical trials in an effort to increase the trustworthiness of clinical trial results. Greg Irving and John Holden have incorporated blockchain technology into clinical trials with the intention of addressing major concerns that plague clinical trials, including outcome switching, data dredging, and selective publication. Drug companies and researchers often face strong incentives to fudge trial data and manipulate protocols in order to yield the desired clinical trial results.

    Enforcement of existing standards is difficult. Despite international regulations requiring trial protocols to be disclosed prior to the commencement of the trials, researchers are still able to manipulate results and change trial protocols to yield desired outcomes. Enforcement is difficult because these fraudulent changes are costly and challenging to detect when researchers control the clinical trial databases and protocols.

    Using blockchain technology, Irving and Holden have shifted clinical trial data to a distributed ledger open to the public. This reduces the chances of fraud and error and lowers costs. The shift to blockchain technology adds a third party who can audit and validate outcomes. Essentially, Irving and Holden have created a “bitcoin notary service” for clinical trials. This new method represents a much more transparent and reliable platform for verifying clinical trial data.

    How it Works

    The system pioneered by Irving and Holden relies on existing blockchain technology. Under their system, the clinical trial protocol is given a unique digital signature and recorded on the blockchain, resulting in a unique bitcoin key for the clinical trial protocol. To verify that the clinical trial protocol, data, or trial results have not been altered, anyone can use the original clinical trial protocol’s unique digital signature to create a new bitcoin key. If the newly created bitcoin key differs from the existing bitcoin key on the blockchain, then changes have been made to the protocol. These changes could represent fraud or other unauthorized alterations, ultimately undermining the trustworthiness of that clinical trial. However, if the newly created bitcoin key is identical to the bitcoin key on the blockchain, this serves as verification that the clinical trial protocol, data, or trial results were not altered or manipulated in any way. The system was recently applied to a cardiovascular diabetes trial that passed peer review on F1000Research, an open science publishing platform. The successful implementation of Irving and Holden’s system in a clinical trial could lead to adoption on a broader scale in the future.
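
    A minimal sketch of the tamper-evidence step described above appears below: it hashes the protocol document with SHA-256 and compares the result against a previously recorded digest. The actual Irving and Holden system anchors that digest to the bitcoin blockchain, which this illustration omits; the file name and stored digest here are hypothetical.

    ```python
    # Sketch of verifying that a trial protocol has not changed since it was
    # registered: re-hash the document and compare against the recorded digest.
    # (The real system records the digest on the bitcoin blockchain; here it is
    # simply compared against a locally stored value for illustration.)
    import hashlib

    def protocol_digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical digest recorded when the protocol was first registered.
    registered_digest = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

    current_digest = protocol_digest("trial_protocol.pdf")  # hypothetical file

    if current_digest == registered_digest:
        print("Protocol matches the registered version: no changes detected.")
    else:
        print("Digest mismatch: the protocol was altered after registration.")
    ```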

    Impact of Incorporating Blockchain Technology into Clinical Trials

    The most significant advantage of utilizing blockchain technology in clinical trials is the increased transparency and trustworthiness of clinical trial protocols and results. Researchers will no longer be able to hide side effects of drugs that come to light in the trials or selectively report clinical trial data. Selective disclosure of clinical trial data is a major concern facing clinical trials, a concern highlighted by a 2001 clinical trial of paroxetine. Paroxetine was intended to treat depression in teenagers, and the original clinical trials declared paroxetine an effective medication for depression. However, it eventually came to light that the supposedly safe and effective antidepressant actually increased the risk of suicide in teenagers. The researchers deceived regulators and the public by selectively reporting clinical trial data. Had Irving and Holden’s blockchain system been deployed, the paroxetine researchers would have been unable to deceive regulators and the public, as anyone could have attempted to verify the accuracy of the reported data. This new system has the advantage of bringing transparency to clinical trials. If researchers attempt to manipulate trial protocols or selectively report trial data, regulators and the public at large will be able to easily spot the tampering and discredit the trial’s conclusions.

    This new system also has the ability to prevent outcome switching. Outcome switching occurs when researchers shift the focus of the clinical trial to fit the results. Instead of reporting a clinical trial as a failure, researchers can use outcome switching to tailor the results to a new desired outcome. With Irving and Holden’s blockchain system deployed, researchers will no longer be able to use outcome switching to deliver positive results to regulators and drug companies.

    Studies have also found that many clinical trials fail to disclose their trial protocols altogether until the trial is completed. This behavior raises questions as to what the actual protocol was and whether the protocol was tailored to fit the results after the fact, a severe form of outcome switching. With a blockchain platform, researchers would have to upload the protocol to the blockchain before clinical trials commenced. Researchers would still have the option to keep the protocol hidden until the trial was completed, which could be useful in commercially sensitive trials. However, because the protocol must be uploaded to the blockchain prior to commencement of the trial, there is no possibility that the researchers could alter the protocol mid-trial and hide that change from regulators or the public.

  • by 
  • HIPAA, the Health Insurance Portability and Accountability Act, which was enacted in 1996, includes several rules and standards that aim to protect individuals’ medical records and personal health information through the creation of national standards and policies. Title II of HIPAA includes the National Provider Identifier Standard, the Transactions and Code Sets Standards, the HIPAA Privacy Rule, the HIPAA Security Rule, and the HIPAA Enforcement Rule. The National Provider Identifier Standard requires organizations to obtain a unique 10-digit national provider identifier (NPI). The Transactions and Code Sets Standards require organizations to handle insurance claims through a standardized electronic interchange. The Privacy Rule, officially known as the Standards for Privacy of Individually Identifiable Health Information, creates protections for personal health information. The Security Rule establishes data security requirements, while the Enforcement Rule outlines violation investigation procedures.

    In 2013, the HIPAA Omnibus Rule was adopted. This rule strengthened privacy and security protections, established more objective standards to assess an organization’s liability after a breach, extended regulations to cover business associates, modified limitations on what information can be used for marketing purposes, and increased penalties for violations. The changes also included a prohibition on the sale of an individual’s medical and health data without their consent. Under HIPAA, a business associate means “any organization or person working in association with or providing services to a covered entity who handles or discloses Personal Health Information or Personal Health Records.”

    Protected Health Information under HIPAA includes individually identifiable health information that relates to past, present, or future physical or mental health, health care services, or payment details for past, present, or future health care services. Employment records are not included.

    HIPAA regulations apply only to unsecured protected health information. Unsecured information is protected health information that has not been rendered “unusable, unreadable, or indecipherable to unauthorized persons.” If a breach involves only secured protected health information, covered entities and business associates are not required to provide notification under HIPAA.

    HIPAA’s Breach Notification Rule applies to covered entities, which include health care professionals and health care organizations. Business associates of these entities are also required to comply with HIPAA and the Breach Notification Rule. This rule requires that notification be provided to impacted individuals in the event of a data breach. Business associates are not responsible for notifying individuals; however, business associates are required to notify the related covered entity of any breach. The Federal Trade Commission applies similar notification requirements to vendors of personal health records and any third-party service providers.

    Under HIPAA, a breach is defined as “an impermissible use or disclosure under the Privacy Rule that compromises the security or privacy of the protected health information.” Any impermissible use or disclosure is assumed to be a breach unless the entity shows there is a low risk that the information has been compromised. This determination is based on four factors: (i) the nature and extent of the health information involved; (ii) the unauthorized person who used the protected information or to whom the disclosure was made; (iii) whether the information was actually acquired or viewed; and (iv) the extent to which the risk to the protected information has been mitigated. Unless an entity demonstrates there is a low risk that the protected health information has been compromised, the entity is required to provide notifications to impacted individuals under HIPAA.

    There are three exceptions under the Breach Notification Rule that allow an entity to avoid having to provide notifications of a breach. The first is if the information was unintentionally accessed or used by an entity’s employee or agent. Under the first exception, this unintentional access or use must have been done in good faith and within the scope of authority. The second exception involves the inadvertent disclosure of information from one party or entity who was authorized to access the information to another authorized party or entity. Under this exception, notification is not required, however, further use of the information is prohibited in this situation. The third exception covers the disclosure of information by a covered entity or business associate to an unauthorized party when the covered entity or business associate had a good faith belief that the unauthorized party had no ability to retain the information. If a breach fits any of the three exceptions, the covered entity or business associate is not required to provide notification under HIPAA.

    In the event a covered entity is required to provide notification under HIPAA, the covered entity must provide notice to the impacted individuals and to the Secretary of the United States Department of Health and Human Services. Notices to individuals should be provided through first-class mail. Email is an acceptable method of notification if the individual has agreed to that method. Notifications to individuals must be made within 60 days of discovery of the breach. Notifications should include a description of the breach, the information involved, steps the individual should take to help mitigate the harm, and what the covered entity is doing to investigate and mitigate the breach, including preventative measures the covered entity is establishing to avoid future breaches. There are additional requirements if a covered entity does not have up-to-date contact information for more than ten impacted individuals. Notifications to the Secretary should be made using the electronic breach report form on the HHS website.

    In limited situations, a covered entity may be required to provide notification to media outlets as well. Covered entities are required to provide notice to prominent media outlets in a state when a breach impacts more than 500 residents of that state. Required notification to media outlets must be made within 60 days.

    In the event that the breach occurs at or by a business associate, the business associate must notify the covered entity within 60 days of discovering the breach. If possible, the business associate should provide information on impacted individuals and other relevant information.

    HIPAA permits covered entities to disclose protected information without consent in several situations. Information may be disclosed to the patients themselves. Information may be disclosed and used internally by the covered entity for payment, health care operations, and treatment. Information may be disclosed or used in any situation where the individual has consented or would have consented (such as when the individual is incapacitated or in an emergency situation). Incidental uses or disclosures are not required to be eliminated. Disclosures for public interest reasons are permitted, including compliance with relevant laws, health oversight, law enforcement, judicial proceedings, research, and other public interest reasons. Information is also permitted to be included in data sets as long as certain specified identifiers are removed. In any of these situations, a covered entity must make reasonable efforts to use and disclose only the minimum amount of necessary information.

  • by 
  • As more technology companies shift to offering their products and services via cloud computing, questions often arise regarding the differences between Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”).

    Software as a Service (SaaS)

    SaaS is a way to deliver applications over the internet, with the applications managed by a provider. Instead of installing and maintaining software on a local server or hard drive, the end user usually has the ability to access the software via a web browser from anywhere. The provider has already built the software and hosts it on a platform. Common characteristics of SaaS include: web access to commercial software; management of the software from a central location; no general requirement for end users to handle software upgrades and patches; and APIs that allow for integration between different pieces of software.
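
    As a hedged illustration of the last point, the snippet below calls a hypothetical SaaS provider’s REST API over the web. The URL, endpoint, and token are invented placeholders; the general pattern of authenticated HTTPS requests returning JSON is what is typical of SaaS integrations.

    ```python
    # Illustrative SaaS API integration: fetch records from a (hypothetical)
    # provider's REST endpoint using an API token.
    import requests

    API_BASE = "https://api.example-saas.com/v1"  # hypothetical SaaS endpoint
    API_TOKEN = "YOUR_API_TOKEN"                  # token issued by the provider

    response = requests.get(
        f"{API_BASE}/customers",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

    for customer in response.json():
        print(customer.get("id"), customer.get("name"))
    ```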

    There are plenty of benefits to SaaS. SaaS works well for scenarios that involve significant or frequent interaction between an end user company and its customers (promotional marketing email blasts, for example), as well as for applications that have a significant need for web or mobile access. An end user does not have to be especially tech-savvy, as the SaaS provider handles all technical aspects of the service. SaaS is sometimes viewed as the most convenient type of cloud-based service in terms of maintenance, since the software is usually entirely managed by the provider.

    SaaS is not without its drawbacks and limitations. If an application requires extremely fast processing of real time data, SaaS is not the best cloud-based vehicle. Additionally, some laws and regulations prohibit certain types of data from being hosted externally.

    Arguably the most well-known type of cloud computing, SaaS offers simple and user-friendly solutions for companies of all sizes. Well-known products that utilize SaaS include Gmail and Google Docs, as well as Microsoft CRM.

    Platform as a Service (PaaS)

    PaaS is a computing platform that allows for the quick and easy creation of web applications. PaaS is similar to SaaS, except that rather than delivering software, PaaS delivers a platform for the creation of software over the web. PaaS makes development, testing, and deployment of applications quick, simple, and cost-effective. Common characteristics of PaaS include: services to develop, test, deploy, host, and maintain applications within the same integrated development environment; web-based user interface creation tools to create and modify different scenarios; built-in scalability of software; tools to handle billing and subscription management; and support for development team collaboration.

    PaaS is great for a number of scenarios. When multiple developers are working on a project, or there is a need for external parties to interact with the development process, PaaS can be ideal. If developers want to automate their testing and deployment services, PaaS is also probably the best route. PaaS is popular among developers since PaaS allows them to focus on the development of applications or scripts, and there is no need to worry about server management or traffic load.

    However, there are circumstances where PaaS may not be the best solution. PaaS would not be ideal if proprietary languages or approaches would impact the development process. If the software needs to be highly portable in terms of where it is hosted, or if software performance requires customization of the underlying hardware and software, then PaaS may not be able to meet all of the target needs. Well-known products that utilize PaaS are Microsoft Azure, Google App Engine, and Heroku.

    Infrastructure as a Service (IaaS)

    IaaS is a self-service model for accessing and managing remote datacenter infrastructure. Essentially, IaaS is a way of delivering cloud computing infrastructure, such as servers, storage, networking, and operating systems, as an on-demand service. Users can purchase IaaS based on their consumption, similar to paying utility bills. Unlike with SaaS and PaaS, IaaS users are responsible for managing applications, data, runtime, and middleware. Core traits of IaaS include: dynamic scaling, resources that are distributed as a service, and a variable pricing model.

    The greatest strength of IaaS is the flexibility it offers. If demand on a company’s infrastructure fluctuates significantly, or if a company lacks the capital to invest in hardware, IaaS is the way to go. IaaS is also a good fit if an organization is growing rapidly, if scaling hardware would be problematic, or if a company is rolling out an experimental project. IaaS is most popular among highly skilled developers and researchers who require custom configurations. IaaS gives the user the highest degree of control and the greatest ability to customize the product.

    If regulatory compliance makes the outsourcing or offshoring of data storage difficult, then IaaS would not be the ideal solution. IaaS is also not ideal if extremely high levels of performance are required. Well-known examples of IaaS vendors are Microsoft, Amazon, and OpenStack.

  • by 
  • On July 25th, a Miami-Dade County judge delivered what is believed to be the first ruling in the country involving bitcoin in a money-laundering case. The defendant, Michell Espinoza, was arrested in 2014 and charged with illegally transmitting and laundering bitcoins. However, the charges were dismissed following the court’s determination that bitcoin cannot be considered “money” under the relevant Florida statutes. The interpretations of “financial transaction” and “monetary instruments” played a pivotal role in the outcome. Judge Pooler ultimately determined that bitcoin was not “tangible wealth” and therefore did not fall under the definitions of a “financial transaction” or “monetary instruments.”

    However, Judge Pooler did leave the door open to bitcoin being deemed money in the future. She expressed that the state legislature should move to address the question. She also stated that “Bitcoin has a long way to go before it [is] the equivalent of money,” as it lacks backing from any government or banking institution. This point is echoed by many others who have expressed doubt about whether bitcoin can be classified as a currency, as it is not yet a reliable medium of exchange and its high volatility makes it an unreliable method for storing value.

    However, the ruling in Florida v. Espinoza that bitcoin is not money may be at odds with other recent decisions. The magistrate judge in SEC v. Shavers, a case involving a bitcoin-based Ponzi scheme, rejected the defendant’s argument that bitcoin was not money and therefore not subject to SEC regulation. Many other cases involve the use of bitcoin for illegal or illicit purposes; however, many of those decisions hinged on whether the underlying activity was illegal, not on whether bitcoin was money. Ultimately, these decisions represent cases of first impression in many jurisdictions, and inconsistencies across jurisdictions are likely to continue until legislatures address the issue.

    Adding to the confusion, different organizations consider bitcoin to be different things. The IRS considers bitcoin to be property, not a currency. The Commodity Futures Trading Commission believes that bitcoin is a commodity, not a currency. It remains to be seen if states will adopt new legislation to address bitcoin’s status as money. However, until legislatures and courts take further action, bitcoin is likely to remain “property” instead of “money” in the eyes of the IRS and many courts.

  • by 
  • What are smart contracts and how do they work?

    Nick Szabo, who is credited with laying the groundwork for the cryptocurrency bitcoin, coined the term “smart contract” in a paper published in 1997; however, the technology did not begin to take off until recently. Smart contracts are computer-programmed contracts capable of self-execution. Unlike traditional paper contracts, smart contracts are capable of interpreting inputs and conditions and automatically executing the related contract terms.

    Smart contracts are programmed just like other computer programs, operating on an “if-then” structure. When a condition is satisfied, the smart contract program triggers the response stipulated in the relevant contract clause. Originally, smart contracts lacked a practical real-world application because banks and other financial institutions still required manual approval before funds would be transferred.
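
    The sketch below illustrates that “if-then” structure in ordinary Python rather than on an actual blockchain platform; the clause, the delivery condition, and the payment function are all hypothetical stand-ins for what a real smart contract system would execute automatically.

    ```python
    # Toy illustration of a smart contract's "if-then" structure (not an actual
    # on-chain contract): when the stated condition is satisfied, the response
    # stipulated in the clause is executed automatically.

    def release_escrow(amount, recipient):
        # Hypothetical payment action; a real platform would move funds on-chain.
        print(f"Released {amount} to {recipient}")

    contract_clause = {
        "condition": lambda delivery_confirmed: delivery_confirmed,
        "response": lambda: release_escrow(2.5, "seller-wallet-address"),
    }

    # Input observed by the contract (for example, a delivery confirmation event).
    delivery_confirmed = True

    if contract_clause["condition"](delivery_confirmed):
        contract_clause["response"]()  # condition met: trigger the clause
    else:
        print("Condition not satisfied; no action taken.")
    ```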

    Smart contracts have become increasingly practical as bitcoin has become more mainstream. Because bitcoin does not rely on manual approvals from financial institutions, smart contracts have opened the door to payments initiated by computer programs. Some of these platforms have become known as “Bitcoin 2.0 platforms” because they involve smart contract programs built on top of underlying bitcoin programs. These two layers work together to initiate and execute transfers of funds around the world instantly.

    There are several smart contract platforms available, including Codius, BitHalo, BlackHalo, BurstCoin, Ethereum, and Counterparty. Codius is being designed by Ripple Labs to work with Ripple Labs’ own cryptocurrency, called Ripple. Ripple Labs is also aiming for Codius to work across a wide variety of other cryptocurrencies. Ethereum is being designed and deployed as a completely independent cryptocurrency system that would operate as a direct replacement for other cryptocurrencies such as bitcoin.

    Practical applications and benefits 

    Similar to cryptocurrencies, smart contracts are grounded in a decentralized system that operates without the intervention or involvement of a third party. A system of smart contracts could help drive down costs in a variety of industries. Tasks such as home mortgages could be significantly simplified. Wills and trust funds would no longer need teams of lawyers and bankers to execute.

    The integration of smart contracts with Internet of Things (“IoT”) products and services has the potential to present numerous opportunities. Applications that combine smart contracts and IoT devices are often referred to as “smart property tokens.” Ultimately, these tokens could represent ownership or rights in property such as cars, houses, cellphones, or a rented apartment.

    An example of this combination is a smart car that is rented out. Upon receipt of payment, the smart contract program could automatically unlock the doors of the vehicle. The smart contract program could also automatically disable the vehicle upon termination of the rental period. With IoT devices, unlocking the car would be possible through a smartphone, and no physical keys, contracts, or money would be required. The end result is that integration of smart contracts and IoT devices could eliminate significant transaction costs and reduce inefficiencies.
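
    A minimal sketch of the rental scenario just described is shown below, again in plain Python as a stand-in for a real smart contract and IoT integration; the vehicle commands and pricing are hypothetical.

    ```python
    # Toy model of the smart car rental described above: payment unlocks the car,
    # and the car is disabled once the rental period ends. The print statements
    # stand in for real IoT device commands.
    from datetime import datetime, timedelta

    class RentalContract:
        def __init__(self, rate_per_day, days):
            self.rate_per_day = rate_per_day
            self.days = days
            self.expires_at = None

        def receive_payment(self, amount):
            if amount >= self.rate_per_day * self.days:
                self.expires_at = datetime.now() + timedelta(days=self.days)
                print("Payment received: unlocking vehicle")  # hypothetical IoT command
                return True
            return False

        def check_expiry(self):
            if self.expires_at and datetime.now() >= self.expires_at:
                print("Rental period over: disabling vehicle")  # hypothetical IoT command

    contract = RentalContract(rate_per_day=40.0, days=3)
    contract.receive_payment(120.0)
    contract.check_expiry()
    ```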

    Potential concerns

    Smart contracts present a number of issues. Foremost, security is a major source of concern. In order to ensure long-term, widespread use, smart contracts will have to provide a sufficient level of security to prevent the possibility of fraud, hacks, and data breaches. There is also concern over how courts may approach smart contracts, conditional inputs, and faulty execution.

    Finally, one of the biggest hurdles to widespread adoption is the development of a simple user interface. If non-tech savvy users are able to quickly and easily create and manage their smart contracts then the technology has tremendous potential. However, if the user interface is difficult to interact with or requires a certain level of technical know-how, adoption of smart contracts could languish.

  • by 
  • Internet of Things (“IoT”) devices are becoming more and more relevant as everything from water bottles to light bulbs to watches to laundry machines have become computerized and connected. IoT entrepreneurs and hobbyists alike often turn to Raspberry Pi and Arduino in working on initial prototypes.

    Raspberry Pi and Arduino are designed to perform different tasks. While the two platforms can accomplish some similar tasks, the differences between them are significant. Both platforms are available at relatively low prices.

    Ultimately, selecting the better option comes down to the type of project. However, many projects may benefit from using both platforms conjunctively. It is fairly common for more complex products or designs to incorporate both platforms working together, with Arduino responsible for things like motor driving, sensor reading, and LED driving, while Raspberry Pi handles calculations and more complex tasks.

    Arduino is an extremely easy to use microcontroller. Microcontrollers are effective at repetitively running a single program or task. Raspberry Pi, on the other hand, is a computer that usually runs a version of the Linux operating system. Raspberry Pi is designed for heavy calculations and multitasking.
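
    As a rough sketch of the division of labor described above, the script below (running on a Raspberry Pi) reads sensor values that an Arduino prints over its USB serial connection and averages them. The serial device path, baud rate, and one-number-per-line message format are assumptions that will vary by setup.

    ```python
    # Raspberry Pi side of a combined setup: the Arduino reads a sensor and
    # prints values over USB serial, while the Pi collects and processes them.
    import serial  # provided by the pyserial package

    # Typical Arduino USB device path and baud rate; adjust for your hardware.
    port = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

    readings = []
    while len(readings) < 10:
        line = port.readline().decode("utf-8", errors="ignore").strip()
        if line:
            try:
                readings.append(float(line))  # assumes one numeric value per line
            except ValueError:
                pass  # skip malformed lines

    port.close()
    print("Average of last 10 readings:", sum(readings) / len(readings))
    ```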


    Arduino Advantages and Disadvantages

    Arduino, while not as powerful as Raspberry Pi, has plenty of very useful applications. Because of its simplicity, Arduino is very easy to use with hardware-based products, as its main purpose is to interface with sensors and devices. Arduino can be powered by a battery pack paired with a clip-on shield to control the power supply. There is no risk of damage to the Arduino unit if the power supply is interrupted.

    Arduino is also much easier to use and connect to analog sensors, as compared to Raspberry Pi. This ease of use is made possible by the wide variety of clip-on shields designed for the Arduino unit which give it added abilities.

    When it comes to networking, Arduino is more limited as compared to Raspberry Pi. In order to connect to a network, an Arduino unit requires an additional clip-on shield that includes an Ethernet port. Additional wiring and coding is required to connect the Arduino unit to a network.


    Raspberry Pi Advantages and Disadvantages

    Raspberry Pi is a more powerful platform than the Arduino; however, this additional power comes at the cost of ease of use for some projects. Many Raspberry Pi models come equipped with HDMI output, an SD card port, a dedicated processor, and dedicated memory. Raspberry Pi is a powerful tool that is easily deployed for tasks such as media streaming or video game emulation. Many Raspberry Pi models also come fully equipped with independent network connectivity, including SSH and FTP file transfer abilities. A Wi-Fi connection is also possible by simply connecting a USB Wi-Fi dongle and installing a driver.

    Because Raspberry Pi is a general purpose computer running a version of Linux, Raspberry Pi modules are much more capable of running multiple programs as compared to the Arduino platform. This capability gives Raspberry Pi the leg up on Arduino when it comes to projects involving complex tasks and intense calculations.

    The SD card port allows a designer to quickly swap out different versions of the operating system or easily update software.

    Unlike the Arduino models, Raspberry Pi requires a constant 5V power feed, and this setup is often more complex than the Arduino setup. Additionally, disconnecting the power from the Raspberry Pi module without properly powering it down could result in data corruption.