Secured no. 1 | Ethereum Foundation Blog

Earlier this year, we launched a bug bounty program focused on finding issues in the beacon chain specification and in client implementations (Lighthouse, Nimbus, Teku, Prysm, etc.). The results (and vulnerability reports) have been enlightening, as have the lessons learned while patching potential issues.

In this new series, we aim to explore and share some of the insight we’ve gained from security work to date and as we move forward.

This first post will analyze some of the submissions specifically targeting BLS primitives.

Disclaimer: All bugs mentioned in this post have already been fixed.

BLS is everywhere

A few years ago, Diego F. Aranha gave a talk at the 21st Workshop on Elliptic Curve Cryptography with the title: Pairings are not dead, just resting. How prophetic.

Here we are in 2021, and pairings are one of the primary actors behind many of the cryptographic primitives used in the blockchain space (and beyond): BLS aggregate signatures, ZK-SNARK systems, etc.

Development and standardization work related to BLS signatures has been an ongoing project for EF researchers for a while now, driven in part by Justin Drake and summarized in a recent post of his on Reddit.

The latest and greatest

In the meantime, there have been plenty of updates. BLS12-381 is now universally recognized as the pairing curve to be used given our present knowledge.

Three different IRTF drafts are currently under development:

  1. Pairing-Friendly Curves
  2. BLS signatures
  3. Hashing to Elliptic Curves

Moreover, the beacon chain specification has matured and is already partially deployed. As mentioned above, BLS signatures are an important piece of the puzzle behind proof-of-stake (PoS) and the beacon chain.

Recent lessons learned

After collecting submissions targeting the BLS primitives used in the consensus-layer, we’re able to split reported bugs into three areas:

  • IRTF draft oversights
  • Implementation mistakes
  • IRTF draft implementation violations

Let’s zoom in on each area.

IRTF draft oversights

One of the reporters, Nguyen Thoi Minh Quan, found discrepancies in the IRTF draft and published two white papers describing his findings.

While the specific inconsistencies are still subject for debate, he found some interesting implementation issues while conducting his research.

Implementation mistakes

Guido Vranken was able to uncover several “little” issues in BLST using differential fuzzing.

He topped this off with the discovery of a moderate vulnerability affecting BLST’s blst_fp_eucl_inverse function.
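Differential fuzzing finds bugs of this kind by feeding identical random inputs to two implementations that should agree and flagging any divergence. Below is a minimal sketch of the technique; the deliberately faulty modular-inverse function is a made-up stand-in, not BLST's actual code:

```python
import random

def inverse_reference(a: int, p: int) -> int:
    # Fermat inverse: a^(p-2) mod p, valid for prime p and a not divisible by p.
    return pow(a, p - 2, p)

def inverse_buggy(a: int, p: int) -> int:
    # Deliberately faulty stand-in: mishandles one class of inputs.
    if a % p == p - 1:
        return 1  # wrong answer whenever a ≡ -1 (mod p)
    return pow(a, p - 2, p)

def differential_fuzz(trials: int, p: int) -> list[int]:
    # Run both implementations on the same random inputs; collect divergences.
    mismatches = []
    for _ in range(trials):
        a = random.randrange(1, p)
        if inverse_reference(a, p) != inverse_buggy(a, p):
            mismatches.append(a)
    return mismatches
```

Even with no knowledge of where the bug hides, enough random trials will surface the input class on which the two implementations disagree.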

IRTF draft implementation violations

A third category of bug was related to IRTF draft implementation violations. The first one affected the Prysm client.

In order to describe this, we first need to provide a bit of background. The BLS signatures IRTF draft includes 3 schemes:

  1. Basic scheme
  2. Message augmentation
  3. Proof of possession

The Prysm client doesn’t make any distinction between the 3 schemes in its API, which is unique among implementations (py_ecc, for example, exposes each scheme separately). One peculiarity of the basic scheme, quoting the draft verbatim: “This function first ensures that all messages are distinct”. This was not ensured in the AggregateVerify function. Prysm fixed this discrepancy by deprecating the usage of AggregateVerify (which is not used anywhere in the beacon chain specification).
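The distinctness precondition is simple to state in code. A minimal sketch with hypothetical function names (this is not Prysm's actual API):

```python
def messages_are_distinct(messages: list[bytes]) -> bool:
    # The basic scheme requires every message in an aggregate to be unique.
    return len(set(messages)) == len(messages)

def aggregate_verify(public_keys, messages, signature, core_verify) -> bool:
    # Hypothetical wrapper: reject before doing any pairing work if two
    # messages collide, as the IRTF draft's basic scheme mandates.
    if not messages_are_distinct(messages):
        return False
    return core_verify(public_keys, messages, signature)
```

The fix is a cheap set-membership check performed before the expensive cryptographic verification ever runs.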

A second issue impacted py_ecc. The serialization process described in the ZCash BLS12-381 specification requires that stored integers always lie within the range [0, p - 1]. For the G2 group of BLS12-381, the py_ecc implementation performed this check on the real part of each element but did not validate the imaginary part. The issue was fixed with the following pull request: Insufficient Validation on decompress_G2 Deserialization in py_ecc.
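The rule the deserializer must enforce is small. An illustrative sketch (the helper name is made up, not py_ecc's actual code), where P is the BLS12-381 base-field modulus:

```python
# BLS12-381 base-field modulus p.
P = 0x1A0111EA397FE69A4B1BA7B6434BACD764774B84F38512BF6730D2A0F6B0F6241EABFFFEB153FFFFB9FEFFFFFFFFAAAB

def fp2_coordinates_valid(real: int, imag: int) -> bool:
    # An Fp2 element has two coordinates; BOTH must lie in [0, p - 1].
    # The bug class: checking only the real part lets an out-of-range
    # imaginary part slip through deserialization.
    return all(0 <= coord < P for coord in (real, imag))
```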

Wrapping up

Today, we took a look at the BLS related reports we have received as part of our bug bounty program, but this is definitely not the end of the story for security work or for adventures related to BLS.

We strongly encourage you to help ensure the consensus-layer continues to grow safer over time. With that, we look forward to hearing from you and encourage you to DIG! If you think you’ve found a security vulnerability or any bug related to the beacon chain or related clients, submit a bug report! 💜🦄

Ropsten Shutdown Announcement | Ethereum Foundation Blog

As previously announced, the Ropsten network has been deprecated and will be shut down in the coming weeks. Over the past few months, infrastructure providers have gradually stopped supporting the network and validator participation rates have been steadily declining.

The vast majority of remaining validator nodes will be shut down during the December 15-31, 2022 period. After this, Ropsten will no longer be supported by client, testing or infrastructure teams.

As a reminder, the next testnet to be sunset is Rinkeby. The network will be live until mid-2023 to give users and application developers the chance to migrate to either Goerli or Sepolia. That said, Rinkeby does not support The Merge, nor will it support future network upgrades. It is no longer a feature-equivalent replica of the Ethereum mainnet.

The Merge and legacy testnet deprecations have provided an opportunity for the Ethereum community to rethink its broader approach to test networks. Proposals around purpose-specific networks for stakers vs. developers, end-of-life norms for testnets and more are being discussed on Ethereum Magicians and in community calls. If you have strong opinions about the future of Ethereum testnets, now is the time to voice them!

Cover image by Micael Widell

Bootstrapping A Decentralized Autonomous Corporation: Part I

Corporations, US presidential candidate Mitt Romney reminds us, are people. Whether or not you agree with the conclusions that his partisans draw from that claim, the statement certainly carries a large amount of truth. What is a corporation, after all, but a certain group of people working together under a set of specific rules? When a corporation owns property, what that really means is that there is a legal contract stating that the property can only be used for certain purposes under the control of those people who are currently its board of directors – a designation itself modifiable by a particular set of shareholders. If a corporation does something, it’s because its board of directors has agreed that it should be done. If a corporation hires employees, it means that the employees are agreeing to provide services to the corporation’s customers under a particular set of rules, particularly involving payment. When a corporation has limited liability, it means that specific people have been granted extra privileges to act with reduced fear of legal prosecution by the government – a group of people with more rights than ordinary people acting alone, but ultimately people nonetheless. In any case, it’s nothing more than people and contracts all the way down.

However, here a very interesting question arises: do we really need the people? On the one hand, the answer is yes: although in some post-Singularity future machines will be able to survive all on their own, for the foreseeable future some kind of human action will simply be necessary to interact with the physical world. On the other hand, however, over the past two hundred years the answer has been increasingly no. The industrial revolution allowed us, for the first time, to start replacing human labor with machines on a large scale, and now we have advanced digitized factories and robotic arms that produce complex goods like automobiles all on their own. But this is only automating the bottom; removing the need for rank and file manual laborers, and replacing them with a smaller number of professionals to maintain the robots, while the management of the company remains untouched. The question is, can we approach the problem from the other direction: even if we still need human beings to perform certain specialized tasks, can we remove the management from the equation instead?

Most companies have some kind of mission statement; often it’s about making money for shareholders; at other times, it includes some moral imperative to do with the particular product that they are creating, and other goals like helping communities sometimes enter the mix, at least in theory. Right now, that mission statement exists only insofar as the board of directors, and ultimately the shareholders, interpret it. But what if, with the power of modern information technology, we can encode the mission statement into code; that is, create an inviolable contract that generates revenue, pays people to perform some function, and finds hardware for itself to run on, all without any need for top-down human direction?

As Let’s Talk Bitcoin’s Daniel Larimer pointed out in his own exploration of this concept, in a sense Bitcoin itself can be thought of as a very early prototype of exactly such a thing. Bitcoin has 21 million shares, and these shares are owned by what can be considered Bitcoin’s shareholders. It has employees, and it has a protocol for paying them: 25 BTC to one random member of the workforce roughly every ten minutes. It even has its own marketing department, to a large extent made up of the shareholders themselves. However, it is also very limited. It knows almost nothing about the world except for the current time, it has no way of changing any aspect of its function aside from the difficulty, and it does not actually do anything per se; it simply exists, and leaves it up to the world to recognize it. The question is: can we do better?


The first challenge is obvious: how would such a corporation actually make any decisions? It’s easy to write code that, at least given predictable environments, takes a given input and calculates a desired action to take. But who is going to run the code? If the code simply exists as a computer program on some particular machine, what is stopping the owner of that machine from shutting the whole thing down, or even modifying its code to make it send all of its money to himself? To this problem, there is only one effective answer: distributed computing.

However, the kind of distributed computing that we are looking for here is not the same as the distributed computing in projects like SETI@home and Folding@home; in those cases, there is still a central server collecting data from the distributed nodes and sending out requests. Here, rather, we need the kind of distributed computing that we see in Bitcoin: a set of rules that decentrally self-validates its own computation. In Bitcoin, this is accomplished by a simple majority vote: if you are not helping to compute the blockchain with the majority network power, your blocks will get discarded and you will get no block reward. The theory is that no single attacker will have enough computer power to subvert this mechanism, so the only viable strategy is essentially to “go with the flow” and act honestly to help support the network and receive one’s block reward. So can we simply apply this mechanism to decentralized computation? That is, can we simply ask every computer in the network to evaluate a program, and then reward only those whose answer matches the majority vote? The answer is, unfortunately, no. Bitcoin is a special case because Bitcoin is simple: it is just a currency, carrying no property or private data of its own. A virtual corporation, on the other hand, would likely need to store the private key to its Bitcoin wallet – a piece of data which should be available in its entirety to no one, not to everyone in the way that Bitcoin transactions are. But, of course, the private key must still be usable. Thus, what we need is some system of signing transactions, and even generating Bitcoin addresses, that can be computed in a decentralized way. Fortunately, Bitcoin allows us to do exactly that.

The first solution that might immediately come to mind is multisignature addresses; given a set of a thousand computers that can be relied upon to probably continue supporting the corporation, have each of them create a private key, and generate a 501-of-1000 multisignature address between them. To spend the funds, simply construct a transaction with signatures from any 501 nodes and broadcast it into the blockchain. The first problem is obvious: the transaction would be too large. Each signature makes up about seventy bytes, so 501 of them would make a 35 KB transaction – which is very difficult to get accepted into the network as bitcoind by default refuses transactions with any script above 10,000 bytes. Second, the solution is specific to Bitcoin; if the corporation wants to store private data for non-financial purposes, multisignature scripts are useless. Multisignature addresses work because there is a Bitcoin network evaluating them, and placing transactions into the blockchain depending on whether or not the evaluation succeeds. In the case of private data, an analogous solution would essentially require some decentralized authority to store the data and give it out only if a request has 501 out of 1000 signatures as needed – putting us right back where we started.
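The size claim above is easy to check back-of-the-envelope (the 70-byte figure is the approximate size the text assumes per signature):

```python
SIG_BYTES = 70        # approximate size of one signature, per the text
SIGNATURES = 501      # 501-of-1000 threshold
SCRIPT_LIMIT = 10_000 # bitcoind's historical standard-script byte limit

total = SIGNATURES * SIG_BYTES  # 35,070 bytes, i.e. roughly 35 KB
over_limit = total > SCRIPT_LIMIT
```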

However, there is still hope in another solution; the general name given to this by cryptographers is “secure multiparty computation”. In secure multiparty computation, the inputs to a program (or, more precisely, the inputs to a simulated “circuit”, as secure multiparty computation cannot handle “if” statements and conditional looping) are split up using an algorithm called Shamir’s Secret Sharing, and a piece of the information is given to each participant. Shamir’s Secret Sharing can be used to split up any data into N pieces such that any K of them, but no K-1 of them, are sufficient to recover the original data – you choose what K and N are when running the algorithm. 2-of-3, 5-of-10 and 501-of-1000 are all possible. A circuit can then be evaluated on the pieces of data in a decentralized way, such that at the end of the computation everyone has a piece of the result of the computation, but at no point during the computation does any single individual get even the slightest glimpse of what is going on. Finally, the pieces are put together to reveal the result. The runtime of the algorithm is O(n³), meaning that the number of computational steps that it takes to evaluate a computation is roughly proportional to the cube of the number of participants; with 10 nodes, that is 1000 computational steps, and with 1000 nodes, 1 billion steps. A simple billion-step loop in C++ takes about twenty seconds on my own laptop, and servers can do it in a fraction of a second, so 1000 nodes is currently roughly at the limit of computational practicality.
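Shamir's K-of-N split described above fits in a few lines. A sketch over a prime field (illustrative only; production code would use a vetted library and constant-time arithmetic):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than our secrets

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    # Hide the secret as the constant term of a random degree-(k-1)
    # polynomial; each share is one evaluation point of that polynomial.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 reconstructs the constant term;
    # any k shares suffice, and k-1 shares reveal nothing.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any 3 of the 5 shares of a 3-of-5 split recover the secret, and which 3 does not matter.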

As it turns out, secure multiparty computation can be used to generate Bitcoin addresses and sign transactions. For address generation, the protocol is simple:

  1. Everyone generates a random number as a private key.
  2. Everyone calculates the public key corresponding to the private key.
  3. Everyone reveals their public key, and uses Shamir’s Secret Sharing algorithm to calculate a public key that can be reconstructed from any 501 of the thousand public keys revealed.
  4. An address is generated from that public key.

Because public keys can be added, subtracted, multiplied and even divided by integers, surprisingly this algorithm works exactly as you would expect. If everyone were to then put together a 501-of-1000 private key in the same way, that private key would be able to spend the money sent to the address generated by applying the 501-of-1000 algorithm to the corresponding public keys. This works because Shamir’s Secret Sharing is really just an algebraic formula – that is to say, it uses only addition, subtraction, multiplication and division, and one can compute this formula “over” public keys just as easily as with addresses; as a result, it doesn’t matter if the private key to public key conversion is done before the algebra or after it. Signing transactions can be done in a similar way, although the process is somewhat more complicated.
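The "algebra commutes with key derivation" claim can be demonstrated with a toy group. Here the elliptic-curve group is replaced by integers modulo a prime (insecure and purely illustrative; Q and G are made-up toy parameters): Lagrange-interpolating the public shares yields exactly the public key of the interpolated private key.

```python
import random

Q = 2**61 - 1  # toy prime group order; real BLS/Bitcoin keys use a curve group
G = 7          # toy "generator": pub = priv * G mod Q stands in for scalar-mult

def pubkey(priv: int) -> int:
    return priv * G % Q

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    # Standard Shamir split of the private key.
    coeffs = [secret] + [random.randrange(Q) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, Q) for i, c in enumerate(coeffs)) % Q)
            for x in range(1, n + 1)]

def lagrange_at_zero(points: list[tuple[int, int]]) -> int:
    # Works identically whether `points` holds private shares or public keys,
    # because the formula is pure field algebra.
    total = 0
    for i, (xi, vi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % Q
                den = den * (xi - xj) % Q
        total = (total + vi * num * pow(den, -1, Q)) % Q
    return total
```

Interpolating the private shares gives back the master private key; interpolating the corresponding public shares gives back the master public key, without any party ever holding the master private key.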

The beauty of secure multiparty computation is that it extends beyond just Bitcoin; it can just as easily be used to run the artificial intelligence algorithm that the corporation relies on to operate. So-called “machine learning”, the common name for a set of algorithms that detect patterns in real-world data and allow computers to model it without human intervention and are employed heavily in fields like spam filters and self-driving cars, is also “just algebra”, and can be implemented in secure multiparty computation as well. Really, any computation can, if that computation is broken down into a circuit on the input’s individual bits. There is naturally some limit to the complexity that is possible; converting complex algorithms into circuits often introduces additional complexity, and, as described above, Shamir’s Secret Sharing can get expensive all by itself. Thus, it should only really be used to implement the “core” of the algorithm; more complex high-level thinking tasks are best resolved by outside contractors.

Excited about this topic? Look forward to parts 2, 3 and 4: how decentralized corporations can interact with the outside world, how some simple secure multiparty computation circuits work on a mathematical level, and two examples of how these decentralized corporations can make a difference in the real world.


Bootstrapping An Autonomous Decentralized Corporation, Part 2: Interacting With the World

In the first part of this series, we talked about how the internet allows us to create decentralized corporations, automatons that exist entirely as decentralized networks over the internet, carrying out the computations that keep them “alive” over thousands of servers. As it turns out, these networks can even maintain a Bitcoin balance, and send and receive transactions. These two capacities: the capacity to think, and the capacity to maintain capital, are in theory all that an economic agent needs to survive in the marketplace, provided that its thoughts and capital allow it to create sellable value fast enough to keep up with its own resource demands. In practice, however, one major challenge still remains: how to actually interact with the world around them.

Getting Data

The first of the two major challenges in this regard is that of input – how can a decentralized corporation learn any facts about the real world? It is certainly possible for a decentralized corporation to exist without facts, at least in theory; a computing network might have the Zermelo-Fraenkel set theory axioms embedded into it right from the start and then embark upon an infinite loop proving all possible mathematical theorems – although in practice even such a system would need to somehow know what kinds of theorems the world finds interesting; otherwise, we may simply learn that a+b=b+a, a+b+c=c+b+a, a+b+c+d=d+c+b+a and so on. On the other hand, a corporation that has some data about what people want, and what resources are available to obtain it, would be much more useful to the world at large.

Here we must make a distinction between two kinds of data: self-verifying data, and non-self-verifying data. Self-verifying data is data which, once computed on in a certain way, in some sense “proves” its own validity. For example, if a given decentralized corporation is looking for prime numbers containing the sequence '123456789', then one can simply feed in '12345678909631' and the corporation can computationally verify that the number is indeed prime. The current temperature in Berlin, on the other hand, is not self-verifying at all; it could be 11°C, but it could also just as easily be 17°C, or even 231°C; without outside data, all three values seem equally legitimate.
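Checking such a submission is cheap for anyone. A sketch using the Miller-Rabin primality test (probabilistic, which is fine for illustration); the verifier name is made up:

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    # Miller-Rabin: a composite survives one random round with probability
    # at most 1/4, so 20 rounds make a false positive vanishingly unlikely.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_submission(n: int, required_digits: str) -> bool:
    # Self-verifying: validity is checked from the submission alone,
    # with no trusted outside data source.
    return required_digits in str(n) and is_probable_prime(n)
```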

Bitcoin is an interesting case to look at. In the Bitcoin system, transactions are partially self-verifying. The concept of a “correctly signed” transaction is entirely self-verifying; if the transaction’s signature passes the elliptic curve digital signature verification algorithm, then the transaction is valid. In theory, you might claim that the transaction’s signature correctness depends on the public key in the previous transaction; however, this actually does not at all detract from the self-verification property – the transaction submitter can always be required to submit the previous transaction as well. However, there is something that is not self-verifying: time. A transaction cannot spend money before that money was received and, even more crucially, a transaction cannot spend money that has already been spent. Given two transactions spending the same money, either one could have theoretically come first; there is no way to self-verify the validity of one history over the other.

Bitcoin essentially solves the time problem with a computational democracy. If the majority of the network agrees that events happened in a certain order, then that order is taken as truth, and the incentive is for every participant in this democratic process to participate honestly; if any participant does not, then unless the rogue participant has more computing power than the rest of the network put together his own version of the history will always be a minority opinion, and thus rejected, depriving the miscreant of their block revenue.

In a more general case, the fundamental idea that we can glean from the blockchain concept is this: we can use some kind of resource-democracy mechanism to vote on the correct value of some fact, and ensure that people are incentivized to provide accurate estimates by depriving everyone whose report does not match the “mainstream view” of the monetary reward. The question is, can this same concept be applied elsewhere as well? One improvement to Bitcoin that many would like to see, for example, is a form of price stabilization; if Bitcoin could track its own price in terms of other currencies or commodities, for example, the algorithm could release more bitcoins if the price is high and fewer if the price is low – naturally stabilizing the price and reducing the massive spikes that the current system experiences. However, so far, no one has yet figured out a practical way of accomplishing such a thing. But why not?

The answer is one of precision. It is certainly possible to design such a protocol in theory: miners can put their own view of what the Bitcoin price is in each block, and an algorithm using that data could fetch it by taking the median of the last thousand blocks. Miners that are not within some margin of the median would be penalized. However, the problem is that the miners have every incentive, and substantial wiggle room, to commit fraud. The argument is this: suppose that the actual Bitcoin price is 114 USD, and you, being a miner with some substantial percentage of network power (e.g. 5%), know that there is a 99.99% chance that 113 to 115 USD will be inside the safe margin, so if you report a number within that range your blocks will not get rejected. What should you say that the Bitcoin price is? The answer is, something like 115 USD. The reason is that if you put your estimate higher, the median that the network provides might end up being 114.05 USD instead of 114 USD, and the Bitcoin network will use this information to print more money – increasing your own future revenue in the process at the expense of existing savers. Once everyone does this, even honest miners will feel the need to adjust their estimates upwards to protect their own blocks from being rejected for having price reports that are too low. At that point, the cycle repeats: the price is 114 USD, you are 99.99% sure that 114 to 116 USD will be within the safe margin, so you put down the answer of 116 USD. One cycle after that, 117 USD, then 118 USD, and before you know it the entire network collapses in a fit of hyperinflation.
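The ratchet described above can be seen in a toy simulation: if every rational miner reports at the top of the currently-safe margin each round, the median creeps upward even though the true price never moves. The margin and prices below are illustrative numbers, not a model of real mining.

```python
import statistics

def next_round(reports: list[float], margin: float) -> list[float]:
    # Every rational miner reports the highest value still inside the
    # safe margin around the current median.
    median = statistics.median(reports)
    return [median + margin] * len(reports)

true_price = 114.0            # the real market price never changes
reports = [true_price] * 10   # everyone starts out honest
for _ in range(4):
    reports = next_round(reports, margin=1.0)
# The consensus "price" has now ratcheted up to 118 with zero market movement.
```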

The above problem arose specifically from two facts: first, there is a range of acceptable possibilities with regard to what the price is and, second, the voters have an incentive to nudge the answer in one direction. If, instead of proof of work, proof of stake was used (i.e. one bitcoin = one vote instead of one clock cycle = one vote), then the opposite problem would emerge: everyone would bid the price down since stakeholders do not want any new bitcoins to be printed at all. Can proof of work and proof of stake perhaps be combined to somehow solve the problem? Maybe, maybe not.

There is also another potential way to resolve this problem, at least for applications that are higher-level than the underlying currency: look not at reported market prices, but at actual market prices. Assume, for example, that there already exists a system like Ripple (or perhaps something based on colored coins) that includes a decentralized exchange between various cryptographic assets. Some might be contracts representing assets like gold or US dollars, others company shares, others smart property and there would obviously also be trust-free cryptocurrency similar to Bitcoin as well. Thus, in order to defraud the system, malicious participants would not simply need to report prices that are slightly incorrect in their favored direction, but would need to push the actual prices of these goods as well – essentially, a LIBOR-style price fixing conspiracy. And, as the experiences of the last few years have shown, LIBOR-style price fixing conspiracies are something that even human-controlled systems cannot necessarily overcome.

Furthermore, this fundamental weakness that makes it so difficult to capture accurate prices without a crypto-market is far from universal. In the case of prices, there is definitely much room for corruption – and the above does not even begin to describe the full extent of corruption possible. If we expect Bitcoin to last much longer than individual fiat currencies, for example, we might want the currency generation algorithm to be concerned with Bitcoin’s price in terms of commodities, and not individual currencies like the USD, leaving the question of exactly which commodities to use wide open to “interpretation”. However, in most other cases no such problems exist. If we want a decentralized database of weather in Berlin, for example, there is no serious incentive to fudge it in one direction or the other. Technically, if decentralized corporations started getting into crop insurance this would change somewhat, but even there the risk would be smaller, since there would be two groups pulling in opposite directions (namely, farmers who want to pretend that there are droughts, and insurers who want to pretend that there are not). Thus, a decentralized weather network is, even with the technology of today, an entirely possible thing to create.

Acting On The World

With some kind of democratic voting protocol, we reasoned above, it’s possible for a decentralized corporation to learn facts about the world. However, is it also possible to do the opposite? Is it possible for a corporation to actually influence its environment in ways more substantial than just sitting there and waiting for people to assign value to its database entries as Bitcoin does? The answer is yes, and there are several ways to accomplish the goal. The first, and most obvious, is to use APIs. An API, or application programming interface, is an interface specifically designed to allow computer programs to interact with a particular website or other software program. For example, sending an HTTP GET request to a blockchain explorer’s API instructs that service’s servers to give you back a file containing the latest transactions to and from the Bitcoin address 1AEZyM6pXy1gxiqVsRLFENJLhDjbCj4FJz in a computer-friendly format. Over the past ten years, as business has increasingly migrated onto the internet, the number of services that are accessible by API has been rapidly increasing. We have APIs for internet search, weather, online forums and stock trading, and more APIs are being created every year. With Bitcoin, we have one of the most critical pieces of all: an API for money.

However, there still remains one critical, and surprisingly mundane, problem: it is currently impossible to send an HTTP request in a decentralized way. The request must eventually be sent to the server all in one piece, and that means that it must be assembled in its entirety, somewhere. For requests whose only purpose is to retrieve public data, like the blockchain query described above, this is not a serious concern; the problem can be solved with a voting protocol. However, if the API requires a private API key to access, as all APIs that automate activities like purchasing resources necessarily do, having the private key appear in its entirety, in plaintext, anywhere but at the final recipient, immediately compromises the private key’s privacy. Requiring requests to be signed alleviates this problem; signatures, as we saw above, can be done in a decentralized way, and signed requests cannot be tampered with. However, this requires additional effort on the part of API developers to accomplish, and so far we are nowhere near adopting signed API requests as a standard.
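To illustrate the idea of signed requests, here is a sketch using a symmetric HMAC; in the decentralized setting described above the signature would instead be produced threshold-wise by the network, but the request shape is the same. The endpoint and field names are made up.

```python
import hashlib
import hmac
import json

def sign_request(secret_key: bytes, method: str, path: str, body: dict) -> dict:
    # Canonicalize the request, then sign it: the key itself never travels
    # with the request, only a signature over its exact contents.
    payload = f"{method}\n{path}\n{json.dumps(body, sort_keys=True)}".encode()
    signature = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return {"method": method, "path": path, "body": body, "signature": signature}

def verify_request(secret_key: bytes, request: dict) -> bool:
    # The server recomputes the signature; any tampering with method, path,
    # or body changes the payload and invalidates the signature.
    expected = sign_request(
        secret_key, request["method"], request["path"], request["body"]
    )
    return hmac.compare_digest(expected["signature"], request["signature"])
```

Because the signature binds the whole request, intermediate nodes can relay it without being able to alter it or extract the key.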

Even with that issue solved, another issue still remains. Interacting with an API is no challenge for a computer program to do; however, how does the program learn about that API in the first place? How does it handle the API changing? What about the corporation running a particular API going down outright, and others coming in to take its place? What if the API is removed, and nothing exists to replace it? Finally, what if the decentralized corporation needs to change its own source code? These are problems that are much more difficult for computers to solve. To this, there is only one answer: rely on humans for support. Bitcoin heavily relies on humans to keep it alive; we saw in March 2013 how a blockchain fork required active intervention from the Bitcoin community to fix, and Bitcoin is one of the most stable decentralized computing protocols that can possibly be designed. Even if a 51% attack happens, a blockchain fork splits the network into three, and a DDoS takes down the five major mining pools all at the same time, once the smoke clears some blockchain is bound to come out ahead, the miners will organize around it, and the network will simply keep on going from there. More complex corporations are going to be much more fragile; if a money-holding network somehow leaks its private keys, the result is that it goes bankrupt.

But how can humans be used without trusting them too much? If the humans in question are only given highly specific tasks that can easily be measured, like building the fastest possible miner, then there is no issue. However, the tasks that humans will need to do are precisely those tasks that cannot so easily be measured; how do you figure out how much to reward someone for discovering a new API? Bitcoin solves the problem by removing the complexity, going up one layer of abstraction: Bitcoin’s shareholders benefit if the price goes up, so shareholders are encouraged to do things that increase the price. In fact, in the case of Bitcoin an entire quasi-religion has formed around supporting the protocol and helping it grow and gain wider adoption; it’s hard to imagine every corporation having anything close to such a fervent following.

Hostile Takeovers

Alongside the “future proofing” problem, there is also another issue that needs to be dealt with: that of “hostile takeovers”. This is the equivalent of a 51% attack in the case of Bitcoin, but the stakes are higher. A hostile takeover of a corporation handling money means that the attacker gains the ability to drain the corporation’s entire wallet. A hostile takeover of Decentralized Dropbox, Inc means that the attacker can read everyone’s files (although hopefully the files are encrypted, in which case the attacker can still deny everyone access to their files). A hostile takeover of a decentralized web hosting company can lead to massive losses not just for those who have websites hosted, but also for their customers, as the attacker gains the ability to modify web pages to send off customers’ private data to the attacker’s own server as soon as each customer logs in. How might a hostile takeover be accomplished? In the case of the 501-out-of-1000 private key situation, the answer is simple: pretend to be a few thousand different servers at the same time, and join the corporation with all of them. By forwarding communications through millions of computers infected by a botnet, this is easy to accomplish without being detected. Then, once you have more than half of the servers in the network, you can immediately proceed to cash out.
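The arithmetic behind this Sybil attack is worth spelling out. The following sketch (the function name and numbers are illustrative, not from any real protocol) shows why open membership plus majority voting, such as the 501-out-of-1000 key split described above, fails when identities are free to create:

```python
# Illustrative sketch: a majority-vote membership scheme is only as
# strong as the cost of an identity. If joining is free, an attacker
# with a botnet simply registers more identities than everyone else.

def attacker_controls_majority(honest_nodes: int, sybil_identities: int,
                               threshold_fraction: float = 0.5) -> bool:
    """Return True once the attacker's fake identities exceed the
    voting threshold of the combined membership."""
    total = honest_nodes + sybil_identities
    return sybil_identities / total > threshold_fraction

# Against 1000 honest servers, matching them is not enough...
print(attacker_controls_majority(1000, 1000))   # False: exactly half
# ...but one extra botnet identity tips the vote.
print(attacker_controls_majority(1000, 1001))   # True: takeover succeeds
```

The point of the sketch is that the attacker's cost scales only with the number of identities, not with any scarce resource, which is exactly the gap the mechanisms below try to close.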

Fortunately, the presence of Bitcoin has created a number of solutions, of which the proof of work used by Bitcoin itself is only one. Because Bitcoin is a perfect API for money, any kind of protocol involving monetary scarcity and incentives is now available for computer networks to use. Proof of stake, requiring each participating node to show proof that it controls, say, 100 BTC, is one possible solution; if that is done, then implementing a hostile takeover would require the attacker to commit more resources than all of the legitimate nodes combined. The 100 BTC could even be moved to a multisignature address partially controlled by the network as a surety bond, both discouraging nodes from cheating and giving their owners a strong incentive to act, and even get together, to keep the corporation alive.

Another alternative might simply be to allow the decentralized corporation to have shareholders, so that shareholders get some kind of special voting privileges, along with the right to a share of the profits, in exchange for investing; this too would encourage the shareholders to protect their investment. Making a more fine-grained evaluation of an individual human employee is likely impossible; the best solution is likely to simply use monetary incentives to direct people’s actions on a coarse level, and then let the community self-organize to make the fine-grained adjustments. The extent to which a corporation targets a community for investment and participation, rather than discrete individuals, is the choice of its original developers. On the one hand, targeting a community can allow your human support to work together to solve problems in large groups. On the other hand, keeping everyone separate prevents collusion, and in that way reduces the likelihood of a hostile takeover.

Thus, what we have seen here is that very significant challenges still remain before any kind of decentralized corporation can be viable. The problem will likely be solved in layers. First, with the advent of Bitcoin, a self-supporting layer of cryptographic money exists. Next, with Ripple and colored coins, we will see crypto-markets emerge that can then be used to provide crypto-corporations with accurate price data. At the same time, we will see more and more crypto-friendly APIs emerge to serve decentralized systems’ needs. Such APIs will be necessary regardless of whether decentralized corporations ever exist; we see today just how difficult cryptographic keys are to keep secure, so infrastructure suitable for multiparty signing will likely become a necessity. Large certificate signing authorities, for example, hold private keys that would result in hundreds of millions of dollars worth of security breaches if they were ever to fall into the wrong hands, and so these organizations often employ some form of multiparty signing already.

Finally, it will still take time for people to work out exactly how these decentralized corporations would operate. Computer software is increasingly becoming the single most important building block of our modern world, but up until now research into the area has been focused on two areas: artificial intelligence, software working purely on its own, and software tools working under human beings. The question is: is there something in the middle? If there is, the idea of software directing humans, the decentralized corporation, is exactly that. Contrary to fears, this would not be an evil heartless robot imposing an iron fist on humanity; in fact, the tasks that the corporation will need to outsource are precisely those that require the most human freedom and creativity. Let’s see if it’s possible.

Supplementary reading: Jeff Garzik’s article on one practical example of what an autonomous corporation might be useful for