Fetch.AI weekly newsletter #028 – Details of our groundbreaking smart contract technology

Hello from everyone at Fetch.AI,

After the excitement of recent months, this week the team have had their heads down delivering some of the many aspects of our technology.

One element of this came in the form of our latest paper concerning our Smart Ledger. We are proud to announce that we have achieved a breakthrough in smart contract technology that massively enhances the value of both the cryptocurrency ecosystem and the wider economy. The principal innovation is known as Synergetic Contracts, which can be used to harness the full power of a distributed compute network to solve coordination problems within a distributed ledger. For the cryptocurrency community, these contracts enable transactions that involve the exchange of different token types without needing an exchange. We will be sharing more details on how the community can get involved with developing this new technology in the very near future.

We’ve also released an introduction to our ledger Python API, showing you how to create a FET balance and send FET tokens over your own private testnet. Please take a look and let us know what you think.

Both documents require a degree of technical expertise, and we recognise the importance of ensuring that the whole community, and not just academics and developers, is able to understand our technology. It was with this in mind that we released our maze demonstration video this week. The video shows agents in two mazes and highlights the superior performance of the agents that use the collective intelligence of the Fetch.AI network compared to those that do not.

We have also welcomed a new member to the team this week. David Minarsch joins us as a computational economics researcher. David combines game theory expertise with engineering and product experience in machine learning and blockchain technology.

Today, Fetch.AI (FET) tokens were listed on Coinsuper. The exchange platform will also be holding a trading competition with 50,000 FET tokens to be won.

Last but not least, our chief technology officer Toby Simpson published an article this week describing the inherent difficulties of proving your identity using today’s technology. As we all know, it is a time-consuming process and one that can be vastly improved. ANVIL is a new technology that bridges Fetch.AI and Sovrin that will alleviate many of the flaws of the current system. To learn more about ANVIL, please sign up for the live demonstration on 28 March.

Fetch.AI Ledger Benchmarking II – Single Lane Performance

This is the second in a series of six articles. If you have not read the system overview benchmark, please see Benchmarking I – Overview and Architecture.

In this post, we discuss single shard performance to quantify the maximum expected throughput of the system. We test the system under various conditions and make projections for the multi-shard performance. Working under the assumption that the average transaction sizes are around 2048 bytes, we can show that the testnet achieves peak rates of 30k transactions/sec or more in non-trivial topologies.
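As a quick back-of-the-envelope check on what these figures imply for network capacity, the raw payload bandwidth can be computed directly (a rough sketch only; it ignores protocol overheads, signatures and message echoing):

```python
# Raw payload bandwidth implied by the quoted figures:
# ~2048-byte transactions at a peak of 30k transactions/sec.
tx_size_bytes = 2048
peak_tps = 30_000

bytes_per_sec = tx_size_bytes * peak_tps          # payload only
mbit_per_sec = bytes_per_sec * 8 / 1_000_000

print(f"{bytes_per_sec / 1_000_000:.1f} MB/s")    # 61.4 MB/s
print(f"{mbit_per_sec:.0f} Mbit/s")               # 492 Mbit/s
```

So the quoted peak rate corresponds to roughly half a gigabit per second of transaction payload alone, before any gossip duplication is accounted for.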

The initial set of tests was conducted using a sequence of nodes connected into a chain. The primary reason for selecting this test is to understand how the gossip-based architecture of the network performs. The results were generated using a selection of cloud computing resources across the US and Europe. Due to the availability of resources, the results were generated on a selection of Intel-based processor architectures. We chose a network topology as follows:

Figure 1: Here the orange node represents the entry point for data and the green node the exit point. We expect that this heterogeneous selection of processors, together with the transatlantic data transfers, more closely resembles main network conditions.

One of the key reasons to look at a chain setup is that, for any graph, propagation is expected to be dominated by the shortest path (under the assumption of roughly uniform propagation times). One of the pitfalls of studying chains is that they do not truthfully reflect the cost of handling message echoing in the system. The performance analysis figures outlined here should be considered with this in mind.

In this post we will focus on two main processes inside the ledger, which are ultimately network-bound. These are the block propagation and the transaction synchronisation respectively.

Block Propagation  

In this section, we examine the block propagation of our system. Inside our ledger, blocks are gossiped from node to node. Block propagation is an important metric for the system since it gives a lower bound on the block time (the average time between blocks in the blockchain), depending on the consensus scheme.

In a proof-of-work scheme block latency will extend the time miners spend wastefully hashing their blocks, while in elected leader schemes it will delay mining of the next block. This is especially notable in the case where the elected leader fails to submit a block: nodes must have an upper bound for expected block time to avoid waiting forever.

For this test, nodes were connected in the chain topology described above. The size of the chain was varied from two nodes to seven nodes. Given the setup of the chain, each hop between nodes had a latency of ~100ms (because consecutive nodes were on opposite sides of the Atlantic). This was done to minimise the variance of network latency in the test.

The number of nodes was chosen to reflect the expected topology of miners – for a small-world model the expected average path length tends to ln(N)/ln(K), where N is the number of nodes and K is the mean degree. Hence, for a network of 100,000 nodes with an average of five connections, the average path length is ~7.15. In practice, however, we expect nodes to self-organise using the Fetch.AI trust framework, so as to minimise this mean path length.
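The small-world estimate above is straightforward to reproduce as a sanity check of the quoted ~7.15 figure:

```python
import math

def avg_path_length(n_nodes: int, mean_degree: int) -> float:
    """Small-world estimate of average shortest-path length: ln(N) / ln(K)."""
    return math.log(n_nodes) / math.log(mean_degree)

# 100,000 nodes, five connections each on average.
print(avg_path_length(100_000, 5))  # ≈ 7.15
```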

In the figure below we present the results of the block propagation. The size (in transactions) of the block was varied as well as the length of the chain of nodes.

The block propagation time is linear in the number of nodes, as evidenced by the graph. This is the expected result, especially given the chain topology and the gossip protocol. However, the gradient of the line as the block size increases could be better. Further analysis points ultimately to the size of the blocks. Planned improvements to our internal serialisation will increase performance.
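To build intuition for why propagation time grows linearly with chain length and steepens with block size, a toy store-and-forward model can be sketched. The latency and bandwidth figures below are illustrative assumptions, not measured constants from our tests:

```python
def propagation_time(hops: int, hop_latency_s: float,
                     block_bytes: int, bandwidth_bps: float) -> float:
    """Toy store-and-forward model: each hop adds the link latency
    plus the time to serialise the block onto the wire."""
    per_hop = hop_latency_s + block_bytes / bandwidth_bps
    return hops * per_hop

# Illustrative numbers only: 6 hops at ~100 ms each,
# a 2 MB block over a 100 Mbit/s (12.5 MB/s) link.
t = propagation_time(hops=6, hop_latency_s=0.1,
                     block_bytes=2_000_000, bandwidth_bps=12.5e6)
print(f"{t:.2f} s")  # 1.56 s
```

In this model the number of hops sets the slope, while the block size (via serialisation time) sets how much steeper the line gets — consistent with the observed behaviour.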

Transaction Synchronisation

The other major network test that was performed was transaction synchronisation. This tests the propagation of transactions that have entered a single node on the network to all other nodes in the network.

In this test setup it is assumed all valid transactions that arrive at any node should eventually be seen by all nodes. In practice, however, depending on transient conditions, transactions with a very low fee may not be prioritised for synchronisation.

The synchronisation of transactions is important to system performance in two ways:

  1. Transactions cannot enter a block until they are seen by a mining node. Therefore, in reality the latency for transactions to enter the blockchain is a function of their synchronisation speed as well as their attractiveness to nodes (fee). 
  2. Transaction propagation ideally is on the same order of magnitude or better than block propagation; once a block has been mined, it is optimal for nodes to have already received and verified the corresponding transactions before they receive the block. If they are missing transactions, they must query and receive these from the network. As the block cannot be verified until this is complete it reduces the effective useful block time of the network, reducing throughput.

Transaction synchronisation in our ledger differs in two major ways from block propagation: transactions are ‘pulled’ in batches from one node to another, and transactions are verified before retransmission. The rationale for a pull-based synchronisation mechanism is that a push-based protocol is likely to suffer from high overheads due to message spamming, and that the system is likely to react poorly to high load.
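The pull-based flow described above can be sketched as follows. This is illustrative only — the class and method names are hypothetical and do not reflect the actual Fetch.AI ledger code:

```python
import hashlib

class Node:
    """Hypothetical node holding a pool of verified transactions."""

    def __init__(self):
        self.pool = {}  # tx hash -> verified transaction bytes

    def known_hashes(self):
        return set(self.pool)

    def verify(self, tx: bytes) -> bool:
        # Stand-in for the real signature check (done via OpenSSL).
        return len(hashlib.sha256(tx).digest()) == 32

    def pull_from(self, peer):
        """Ask a peer which transactions it holds, then pull only the
        ones we have not yet seen, verifying each before admitting it."""
        for h in peer.known_hashes() - self.known_hashes():
            tx = peer.pool[h]
            if self.verify(tx):  # CPU-bound step
                self.pool[h] = tx

a, b = Node(), Node()
a.pool = {hashlib.sha256(t).hexdigest(): t for t in (b"tx1", b"tx2")}
b.pull_from(a)
print(len(b.pool))  # 2
```

Because each node requests only the hashes it is missing, repeated pulls are cheap and the protocol naturally avoids re-sending data a peer already holds.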

For an initial test, nodes were connected in a chain configuration (varying in length from 1 to 8 nodes) and 250k transactions were submitted to the first node in the chain. Nodes were run as separate processes locally on a single server. This initial test evaluated the performance of the system in the best-case scenario where network latency is effectively zero. It was designed to verify our assumption that system performance in this setup would be CPU bound due to the process of verifying transactions.

This can be seen from the figure above: doubling the number of CPUs available on the machine resulted in a significant performance improvement. This validates our model that the process is CPU bound. Further execution-level analysis also verified that the hot path for each of these processes was a set of calls to the OpenSSL cryptographic library: specifically, the calls required to verify the signature(s) inside the transactions.
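A toy single-core measurement illustrates why this stage is CPU bound. The real hot path is ECDSA signature verification via OpenSSL; SHA-256 hashing is used here purely as a hypothetical stand-in for a per-transaction CPU cost:

```python
import hashlib
import time

def verify(tx: bytes) -> bool:
    # Stand-in for signature verification (the real hot path is
    # an ECDSA check via OpenSSL, which is far more expensive).
    return len(hashlib.sha256(tx).digest()) == 32

def verify_rate(n_tx: int, tx_bytes: int = 2048) -> float:
    """Transactions verified per second on a single core."""
    txs = [bytes([i % 256]) * tx_bytes for i in range(n_tx)]
    start = time.perf_counter()
    assert all(verify(tx) for tx in txs)
    return n_tx / (time.perf_counter() - start)

print(f"{verify_rate(10_000):,.0f} tx/s on one core")
```

Since each verification is independent, the work parallelises almost perfectly across cores, which is consistent with the near-doubling of throughput observed when the CPU count was doubled.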

Next, we performed a more realistic evaluation with network delay. A series of chain lengths was selected (2, 3, 4 and 5). For each chain of nodes, individual machines were deployed as a network and 1,000,000 transactions were submitted to the beginning of the chain. In each experiment, the time between transaction submission and full synchronisation was measured. Since we were primarily interested in synchronisation performance, actual transaction execution was disabled. In this way we obtain a more accurate benchmark for this process.

A summary of the observed results is shown in the figure below:

For this set of tests, the system load quickly becomes CPU bound. This is because the transaction verification tasks dominate. This is primarily due to the fact that the transaction sync stage must verify all transactions before they are transferred to another node. We will be reviewing this design decision in the future. This explains the observed slowdown with the number of hops required to achieve full network synchronisation: as the chain length grows, the effective transaction throughput of the system is reduced, which is the expected behaviour.

When we consider the rates for each 1-second interval, the following distribution is observed for the five-node chain case:

In summary, our tests have highlighted the CPU intensity of the transaction synchronisation protocol. Despite this, the underlying mechanism is able to batch the transactions quite well, leading to reasonable average transaction throughput rates.


These tests provide valuable information on the performance of our system. It is clear that careful tuning of the system will be required in order to ensure the network is working efficiently. As discussed, this will be a multi-variable problem depending on the topology of the network, the effective transaction input rate and the computation power available.

It should be noted, however, that we expect this system to scale linearly with the number of lanes (shards) that are present in the system. Due to current limitations of the ledger software, this scaling cannot be tested at this time beyond the modelling already performed. When this feature is implemented, we will evaluate the performance again to validate this claim.

A significant proportion of the computational effort in this subsystem is the verification of the incoming transactions. Any possible future improvements that are made to this process will have a large positive knock-on effect on performance in this subsystem.

The current performance of the ledger with today’s implementation is satisfactory: it provides enough throughput for the Fetch.AI system to function with a suitable margin remaining. We are, though, aware of many areas where further improvements, technology and optimisations can increase the performance significantly. Over the coming months, we will be implementing, benchmarking and releasing these improvements.

Jay Loe joins Fetch.AI as a video and graphics content creator

We’re pleased to welcome Jay Loe to Fetch.AI. Jay’s passion for visual storytelling will see him fit in perfectly as part of the company’s growing marketing team. Since graduating from Anglia Ruskin University with a degree in Film & TV Production, he has worked both as an in-house and freelance video editor/producer, developing video content for a range of corporate clients. In this time he has also had short films screened at local festivals.

Jay has a range of skills including shooting, editing and motion graphics. He will be drawing on all of these techniques to produce compelling content that demonstrates the potential of Fetch.AI’s innovative technology and is looking forward to the challenge ahead.

If you would like to get involved to help Fetch.AI bring tomorrow’s decentralised, autonomous digital economy to life, please visit our careers page.

David Minarsch joins Fetch.AI as a computational economics researcher

We are thrilled to announce that David Minarsch has joined Fetch.AI’s research team.

David’s experience combines game theory expertise with engineering and product experience in machine learning and blockchain. With a PhD in Economics – specifically in Applied Game Theory – from the University of Cambridge and startup founding experience through Entrepreneur First, David is well placed to contribute to Fetch.AI’s consensus mechanism, smart ledger and autonomous agent design research. Product experience in his own startups has taught David how to deliver value to end users without getting lost in building white elephants.

Fetch.AI is the first project in the blockchain space David has come across with a focus on fundamental questions in economics and game theory. He is excited to work alongside our outstanding team of researchers and engineers in his ‘golden triangle of interest’: game theory & mechanism design, AI & autonomous agents and decentralised ledger technology.

If you are interested in being part of building Fetch.AI’s decentralised future, please visit our careers page.

How does a maze help to demonstrate Fetch.AI’s technology?

At Fetch.AI we are creating the infrastructure for tomorrow’s digital economy. We remain well on course for our mainnet launch later in the year, but this has been no easy task. Nor should it be – otherwise everyone would already have developed the technology themselves. Similarly, the concepts behind the technology of the future aren’t always straightforward for newcomers to the project to comprehend.

Fetch.AI operates at the convergence of innovations in artificial intelligence, machine learning and blockchain technology. On the network, Autonomous Economic Agents (AEAs) represent individuals and their assets, product manufacturers and service providers. These can interact with each other to execute economic exchanges without human supervision to construct solutions to complex problems. The adaptive and self-organising collective intelligence of the network enables this to take place.

In the Fetch.AI ecosystem, agents are able to communicate with each other using a universal language. This allows the agents to find each other, broker deals and complete transactions to find answers to problems humans would have neither the time nor the cognitive ability to solve by themselves. The new digital economy will need many of the same features that we see in existing markets, such as communication, negotiation and trust. To demonstrate these features, we studied agents attempting to find an exit from a maze. In these simulations, all agents use a depth-first search algorithm to traverse the maze.
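Depth-first search itself is simple to sketch. The following is a generic, minimal implementation over a grid maze, not the agents’ actual code:

```python
# Minimal depth-first search over a grid maze (illustrative only).
def dfs_escape(maze, start, exit_cell):
    """maze: set of open (row, col) cells. Returns a path or None."""
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == exit_cell:
            return path
        # Explore the four neighbouring cells that are open and unvisited.
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in maze and nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None  # no route to the exit

open_cells = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)}
print(dfs_escape(open_cells, (0, 0), (2, 2)))
```

Each isolated agent runs a search like this from scratch; the point of the demonstration is that sharing the discovered path across the network makes every other agent’s search unnecessary.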

If we extend the simulations to many agents, their inability to coordinate leads to a wastage of resources. When an agent finds a way out of the maze, it doesn’t affect any of the other agents in the network as data cannot be exchanged. The crucial information that has been uncovered is immediately lost and the remaining agents continue their isolated efforts to find the exit. This is depicted in the video by the behaviour of the agents in the maze on the left of the screen.  

By contrast, the maze on the right side of the screen during the video illustrates what happens when we use the Fetch.AI framework to enable value exchange. Fetch.AI’s machine learning infrastructure enables collective intelligence. This allows agents to communicate with one another to increase economic efficiency. Deciding who to trade with, how and when to negotiate and determining the correct value for the information are all complex problems. When an agent successfully locates the exit, the yellow arcs show the value and knowledge of the exit being communicated between the agents and their movement being coordinated towards it as a result.

The Fetch.AI framework enables agents to represent a limitless number of people and assets. Machines fitted with agents will be able to communicate with each other, trading data from a huge range of sources to find the best solution. But let’s keep this simple for now by using a straightforward example. An agent in a dishwasher would be able to trade and negotiate with energy providers, whether they are localised networks or the national grid, to discover the cheapest time to function. The agent would then be able to autonomously switch the dishwasher on and off at the correct times. This benefits both the owner of the dishwasher and the energy provider. The owner of the dishwasher returns home to clean plates having spent the minimum amount on energy. Meanwhile, the energy provider has been able to minimise peaks and troughs in demand by deploying an agent to operate autonomously on the grid.

The effect of this technology can be multiplied across numerous household devices thanks to our unique, scalable ledger. By trading data, they can work together to provide a seamless, contextualised experience. The benefits of such technology would also radically improve efficiency in sectors such as transport and supply chains.

Removing the honesty box from the economy with an ANVIL

Trusted identity is important. Is Alice, your new Facebook friend in France, really a 24-year-old account manager, or is she Steve, who is extracting your identity information piece by piece until he can lift the contents of your bank account? And talking about Steve, your LinkedIn connection who is a VP of Research and Development at an up-and-coming technology company in San Francisco, how do you know he isn’t Simon, an 18-year-old from Doncaster in the UK who hates you for some reason and is biding his time before making a move? Then there are the claims made against your identity. How do you prove you have a job, what your salary is, your age, whether you are insured, whether you have a driving licence or whether you do indeed have the university degree your resume claims you have? That’s a lot of paperwork to find, store and produce when it is needed. Furthermore, lots of it is easy to forge and exists as a one-shot wonder.

When you hand your driving licence to a car rental company in another country, how do they establish that it is valid and not revoked? They’ve got a lot to do: they have to prove your identity, and that the claims you make are both true and attached to you, before giving you the keys to the car. Of course, in the end, they simply don’t aim for perfection; they have insurance against things going wrong, so they just have to make best efforts. In the meanwhile, if you are trustworthy, you are handing over a lot of personal information, most of which they didn’t need, to a third party that you know neither personally nor professionally. Who’s to say that this information will be stored securely and used wisely? In Europe, GDPR (which I’ve heard referred to as “Good Data Practice Rules”, an endearing alternative wording for the acronym) provides regulation to protect your data, but as I’ve often heard, red traffic lights don’t stop cars, and mistakes, as well as “mistakes”, do happen.

Your life is full of cases where you have to prove who you are and then make claims against that identity that themselves have to be proved. None of these claims are owned or controlled by you, which is why you have to do so many checks at the bank when you open a bank account, and then do them again at the bank next door when you pop over there to open a savings account. There’s also the mysterious case of providing too much information. When you rent a property, there’s a huge gap between what the landlord needs to know and what you have to provide to prove that. The landlord needs to know that a) you are employed and have been for at least six months, b) your salary is at least, say, £2000 a month, c) you are at least 18 years of age and d) you are entitled to live in this country. That is four simple things. What they get, though, is a) your full employment information, b) your actual salary from payslips including tax identity, social security or national ID numbers, c) your date of birth and d) your passport with any relevant visas. In short, they now have the keys to your digital life. Which is worrying for you, but also a massive pain for them, as they have to check each and every piece of information you give them because there is no trivial way of knowing whether the claims you made are complete fiction or not. Payslips and employer information, fine, but you might have been fired that morning. The utility bills to prove you’ve been in the country and are a responsible citizen are, yeah, pretty easy to fake: there are websites for that. Identity information? There’s a bloke at the pub who’ll… well, “help you out”, for a few hundred pounds. Checking presented credentials and claims takes time, costs money, and involves effort from lots of people and entities — both those that charge a fee and those who feel obliged but really do have better things to do than respond to constant reference and verification requests.

A decentralised, machine-readable solution

There are problems on all sides of the fence. Identity has to be proved. Claims against it have to be checked to see if they are true, have not been revoked and are indeed against that identity and not another one (who’s to say if the degree certificate is yours, and not someone else’s?). Couple this with the vast amount of over-information provided to prove one small thing and we have a mess, which is hugely expensive, inconvenient in your day-to-day life and generally just a massive pain for everyone involved.

Imagine, though, if instead of relying on easily forged, hard-to-check documents, or on third parties outside your control, these certificates were owned by you: machine-verifiable, and trivially easy for those to whom you presented them to check that they were true, unmodified and not expired. Imagine if blockchain could be combined with other technologies to provide a decentralised trust system, owned by no single company or government, in which individuals become their own record keepers and are empowered to deliver claims made against their identity.

This is about trust through verifiable credentials built using a cryptographic technology called zero-knowledge proofs. When you say you have something, are something, or are entitled to something, the person or entity hearing those claims should be able to independently verify them, and do so by machine, without human involvement. If those claims involve a temporal component (e.g. they can expire, change or be withdrawn) then they should be revocable. This includes insurance, your five-star food hygiene rating with a local authority or the fact that you have a clean driving licence. In short, when someone makes a claim, you need to be able to establish for sure that the claim is valid at this precise moment and applies to the identity making it. This also solves the promiscuous way in which we are obliged to share considerably more data than we’d like to, or indeed need to, in order to prove such things. Now you can prove you are at least 18 without disclosing your age or date of birth, or that you earn at least £2000 a month without providing the actual value, or that you hold a valid EU passport with at least six months left to run without presenting it. In short, you can now prove something without revealing the details of the claim you are making. Neat, eh?
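The flavour of such a predicate proof can be illustrated with a toy sketch. This is not a real zero-knowledge proof, and the keys and names below are hypothetical; it simply shows the shape of the idea, where a verifier learns “over 18” without ever seeing a date of birth:

```python
# Toy illustration of selective disclosure: a trusted issuer attests a
# derived predicate ("over 18") so the verifier never sees the birth date.
# NOT a real zero-knowledge proof. We also use a shared HMAC key for
# brevity; real systems use public-key signatures so the verifier never
# holds the issuer's secret.
import hashlib
import hmac
from datetime import date

ISSUER_KEY = b"issuer-secret-key"  # hypothetical issuer key

def issue_credential(birth_date: date, today: date):
    """Issuer inspects the raw attribute but signs only the predicate."""
    age = (today - birth_date).days // 365
    claim = f"over18={age >= 18}"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).digest()
    return claim, tag

def verify_credential(claim: str, tag: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

claim, tag = issue_credential(date(1995, 6, 1), date(2019, 3, 20))
print(claim, verify_credential(claim, tag))  # over18=True True
```

The verifier sees only the signed predicate, never the underlying date of birth; a genuine zero-knowledge range proof achieves the same disclosure property without even trusting the issuer to compute the predicate.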

Lubricating the digital economy

When you’re able to do such things, then a massive block in the digital economy is removed. When such claims can be attached to digital identities as well as human ones, then something else happens: you enable autonomous agents to get trusted work done on behalf of what they represent — independent of human interaction. When all this is mixed together in an architecture like Fetch.AI, it enables decentralised applications that were simply not possible before. Let’s take a decentralised Uber: you, as a passenger, can verify that the driver is authorised by the local authority to pick you up and that the car you’ll be picked up by is insured, taxed and serviced. As the driver of the car, you can verify that the person meets age restrictions, is where they say they are and can afford to pay for the journey. Agents representing the driver and the person work together to validate these claims before the pickup is chosen. In the meanwhile, the Fetch.AI digital world coupled with synergetic computing have worked to optimally connect driver to passenger and have planned the best route for achieving an efficient journey. When the same network is also taking advantage of journeys to move packages and food around, multiple verticals are seamlessly integrated improving utilisation, efficiency and putting control over earnings into the individual’s hands. This is an absolutely vast change enabled by autonomous economic agents in a world driven by AI, structured for machines and supported by verifiable credentials that can cross the boundaries between markets in real-time. It is a disintermediation of the economy, one that returns control to the individual and removes some of the middlemen that have shoe-horned themselves into the world in recent years.

Such claims will be passed around between autonomous agents continuously. When using synergetic computing to optimise the connection of electric vehicles and charging points, a vehicle will be able to prove it is serviced, is compatible and is where it says it is. In the meanwhile, the charging station can prove it exists and is approved by the manufacturer to be used with that vehicle. This creates a world based on trust, where it is not possible to assert a claim that has been made up. Removing the need for human checks or involvement of third parties makes this work for machines, autonomous economic agents, that can get on in this new, trusted world by themselves. With the global ledger used for certificates and logging transactions, there is increased information available for machine learning to present trust values directly into the digital world: you can cut out all agents representing charging stations that are not officially certified to charge your car, or remove any food provider which does not have at least a four star hygiene rating.

As we highlighted in our Technical Introduction Paper over a year ago, trust and reputation are vital to allow both agents and users to transact with the least risk: section 3.6 describes just some of the ways this valuable information is delivered to the users, both digital and human, of Fetch.AI.

You can trust me, I’m in crypto

So let’s be frank on a truly grand scale: the whole point of true decentralisation is self-service trust. You should be able to transact without a centralised source of information to tell you that it is safe to do so. Behaving well over a period of time builds reputation, and working with other parties with a long, established reputation of trustworthy operations lets you work free of fear. How you deliver this trust and reputation, though, is where it gets a touch more difficult. How many of those transacting on Bitcoin or Ethereum check their facts on the ledger? They could, but do they? What happens if it is not just people, but machines, too? How does anyone (machine or human) get access to the trust information they need in a timely and, of course, trustworthy way? There are a lot of questions there and none are trivial to answer.

Fetch.AI built a system specifically to ensure that digital entities have the information necessary to work safely whilst keeping their risk exposure to a level they are satisfied with. Part of this is enabling the construction, delivery and use of verifiable credentials: you are who you say you are, you have what you say you have and you’re qualified to do what you say you’re doing, for example. Then there’s friction-free access to reputation and other key network trust figures: what’s the risk of connecting to this node, or talking to this agent? We provide high-level command APIs through the Open Economic Framework (the gateway to the network for agents) that deliver this information, and agents need not trust just one node: they may connect to several to get a weighted answer. Agents that wish to work in a more trusted environment can refine their view of the digital world not just semantically, economically, temporally or geographically but by risk level, effectively making poorly behaving nodes and agents invisible to them.

Enabling a high-performance machine-to-machine economy, where billions of agents can do business in a trusted environment without human supervision or intervention, is supported and achieved with many technologies. Some of these sit at the protocol level, some at higher levels, and in some cases the integration of other technologies, bringing in expertise and experience from outside, can make a significant difference to the level of service that agents receive.

Sovrin, Outlier and Fetch.AI: friends in high places

And this is why Outlier Ventures have built ANVIL: a bridge between Sovrin and Fetch.AI that allows agents to assert claims and have those claims verified. This example of integration between the two systems enables a range of very, very cool things. Now, agents can change what they deliver according to the credentials of the buyer. Think of being able to provide the red-carpet service to those who are qualified to receive it: travel agents being able to assert that the person they represent is a silver member of the frequent flier club without disclosing the membership number, hotels offering discounts to regular customers or special rates for elderly people, students or healthcare workers. It also covers an electric-car charger provider verifying that the attached car is compatible, and the car verifying that the charger is real and approved for it to use. All the agents involved can prove what they need to prove in order to get things done, and do so without any of the pain and agony usually associated with such things, such as over-disclosure or the painful, slow requirement to verify each thing individually with centralised third parties.

Verifiable interaction with the ANVIL API in just 8 lines of Python. Yup, you heard me, 8 lines of Python

From 28 March, you’ll be able to get the code, see it work, and adapt it to your own requirements. ANVIL further enables Fetch.AI’s economic Internet by delivering Sovrin’s self-sovereign identity technology as an additional tool in the box: giving people and things the ability to collect and hold their own digital credentials. Watch the live demonstration of ANVIL.

This is a major achievement in demonstrating the economic Internet of the future and how straightforward it can be. Here, we provide one glimpse of how trust, identity and proof-of-claims can be handled in Fetch.AI. It’s not just an idea, or some thoughts, it’s real, it works, and you can use it to let Fetch.AI agents on the testnet get things done in a trusted, leak-free, decentralised environment where the agents and those they represent own and control their own digital credentials. Get involved, take a look, and let’s BUIDL.

Fetch.AI weekly newsletter #027 – A new consensus whitepaper

Hello from the Fetch.AI team,

It has been an exciting week for releases. This week we launched the FET wallet, which is now available for iOS on the App Store and for Android on the Google Play Store. The test network is open to everyone who has Fetch.AI tokens and we encourage you all to take a look. To make this as easy as possible, we’ve written a guide on how you can use the FET wallet and our community site to gain access.

We also unveiled our groundbreaking consensus whitepaper, A Minimal Agency Scheme for Proof-of-Stake Consensus. This outlines how we are developing the world’s fastest, most secure decentralised ledger.

You can find out more about how we are overcoming the blockchain trilemma by reading our Medium post. Alternatively, you can watch our CTO Toby Simpson explain it in a nutshell below:

Last Sunday, Toby was part of the judging team for the Cambridge Bradfield Hackathon that took place over 24 hours. It was great to see so many ideas for applications of distributed ledger technology being brought to life and it was wonderful to have the opportunity to work with Cambridge Blockchain Society. We enjoy working closely with the developer community and are looking to participate in more hackathons in the near future.

This week we were pleased to announce that Fetch.AI (FET) tokens are now listed on CoinAll. The cryptocurrency exchange is holding a seven-day celebration featuring a token giveaway to mark the occasion. There is also an ongoing trading competition on Kucoin.

Today we hosted a live AMA with CEO Humayun Sheikh alongside Toby. Together they answered questions on a range of topics. See a full recording of the event below:

We also checked in with artificial intelligence researcher Ali Hosseini who is working on making Multi-Agent Systems a reality.

And finally in other company news, I have joined the marketing team. It’s been a fantastic first week getting to grips with all the exciting developments we’re working on. I can’t wait to share them all with you. To find out a bit more about me, read my new starter blog post.

What are you working on? Checking in with Ali Hosseini

Hi, my name is Ali. My areas of interest are Multi-Agent Systems and Artificial Intelligence in general. At Fetch.AI, I’m currently working on making multi-agent systems a reality. For the uninitiated, let me explain what I mean by agents and multi-agent systems.

A computational agent is a software system that represents an entity and is capable of autonomous action on its behalf. Each agent is designed to look after the interests of the entity it represents. This could be an individual, a team of people, a whole organisation, or even a government. Agents are typically situated in an environment, and have the means to sense and act in this environment autonomously, i.e., without their owners continually telling them what to do.

The thermostat in your house is an example of a simple agent that looks after your (temperature related) preferences and adjusts the temperature of the house (hopefully!) without you constantly getting up to change the temperature settings. Of course, agents may be much more sophisticated and capable of performing more complex tasks.  
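The thermostat example can be sketched as a minimal sense-act loop. The class and method names below are purely illustrative, not part of any Fetch.AI API: the agent senses the environment (the room temperature) and acts on its owner's behalf without the owner intervening.

```python
# A minimal sense-act agent, illustrating the thermostat example above.
# Names and thresholds are illustrative only.

class ThermostatAgent:
    """Looks after its owner's temperature preferences autonomously."""

    def __init__(self, target_temp: float, tolerance: float = 0.5):
        self.target = target_temp
        self.tolerance = tolerance

    def act(self, sensed_temp: float) -> str:
        # Sense the environment, then choose an action that serves the
        # owner's interests, with no owner input at decision time.
        if sensed_temp < self.target - self.tolerance:
            return "heat_on"
        if sensed_temp > self.target + self.tolerance:
            return "heat_off"
        return "idle"


agent = ThermostatAgent(target_temp=20.0)
print(agent.act(18.0))  # heat_on
print(agent.act(21.0))  # heat_off
```

A more sophisticated agent would replace the fixed thresholds with learned preferences, but the structure, autonomous sensing and acting in an environment, is the same.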

In many scenarios, there may be more than one agent in the environment, each representing a different entity with different, often conflicting, interests. Therefore, each agent performs actions that change the environment to its own benefit. One example is the complex world of transportation, where every driver’s personal (read selfish) goal is to get to their own destination as quickly as possible. Note also that the actions of each driver affect the environment, which subsequently impacts the experience of other drivers. For example, a car joining a motorway from a slip road changes the space available for those already on the motorway, starting with the cars in the leftmost lane.

Moreover, many of the goals that an agent sets for itself consist of tasks that the agent may not be able to complete on its own. If every driver took the fastest route to their destination with no consideration for the intent of other drivers on the road, we can all imagine what the outcome would be: traffic, deadlock, accidents and delays for all.

In the Fetch.AI digital world, many agents co-inhabit an environment. Each agent is designed to look after the interests of the entity they represent and these interests often conflict with one another as they cannot all be fulfilled at the same time. Equally, some of the goals the agents define for themselves to promote their personal interests cannot be achieved in isolation. This scenario is quite similar to a human society. Just as we cooperate and reach agreements with others in everyday life, the agents also need to have the capability to coordinate, cooperate and reach mutually acceptable agreements in order to achieve the outcome sought by the agents and their owners.

Drivers may not be able to achieve their goal of getting to their destinations quickly without coordinating their manoeuvres with other drivers. This coordination could be in the form of a central entity (e.g., a central traffic control unit) introducing all-encompassing regulations (e.g., blocking specific roads at particular times to ease the flow of traffic), or emerge purely out of a peer-to-peer interaction, such as by giving space to a car that is joining your lane.

Much of the research and development work I do at Fetch.AI relates to the problems I have described, broadly falling under two categories: a) the design of individual intelligent agents capable of acting autonomously on their owner’s behalf, and b) the design of technologies that facilitate the cooperation and coordination of autonomous agents, with selfish and often conflicting personal agendas, in a multi-agent environment.

If you’re interested in getting involved in the creation of a decentralised digital world of autonomous agents, please visit the careers page on our community site.

Fetch.AI announces breakthrough in solving the ‘blockchain trilemma’

Minimal agency consensus scheme preserves decentralisation in a high performance distributed ledger

Fetch.AI today announces a technical breakthrough on the challenge of the ‘blockchain trilemma’, with a new approach to consensus. The consensus uses a Proof-of-Stake (PoS) scheme that achieves strict transaction ordering, fast confirmation times and improved security compared to existing platforms.

The novel approach to achieving consensus includes the following innovations:

  1. A Decentralised Random Beacon that elects a committee of nodes tasked with reaching agreement on the validity of a set of transactions. This introduces cryptographic measures that ensure decentralisation whilst preventing individual nodes from being able to interfere with or delay the progress of the blockchain.
  2. A DAG (Directed Acyclic Graph) but without the delay in reaching finality that is common to other DAG-based systems. Rather, the ‘leader’ of the committee (securely and randomly elected) is able to deterministically construct a strict ordering of transactions for the next block to be added to the blockchain from the partially ordered transactions contained in the DAG. The deterministic mechanism for achieving transaction finality combined with the scalability and transaction throughput of the ledger design will enable it to outperform existing systems.
  3. The overall design of the consensus mechanism, including the decentralised random beacon, collaborative block production and deterministic block mapping, focuses on achieving ‘minimal agency’, which contributes to improved security by reducing the influence any one node can have on the transactions that are entered into the blockchain.

Jonathan Ward, head of research at Fetch.AI, commented: “Blockchain technologists have long understood the ‘trilemma challenge’ of achieving the correct balance of security, decentralisation and scalability. Our new approach to achieving consensus makes use of a decentralised random beacon, which in turn allows us to harness a truly decentralised DAG and deterministic transaction ordering for the first time. This means we can scale beyond today’s existing systems without compromising significantly on security or decentralisation.”

Ward continued: “This is a landmark achievement in the development of our technology. It’s going to be exciting to see the consensus implemented on our scalable ledger, which is capable of synchronising 30,000 transactions per second on a single shard.”

Humayun Sheikh, CEO, Fetch.AI added: “Any consensus mechanism needs to be specific for the purpose of its distributed ledger. Fetch.AI’s ledger underpins deployment of ‘multi-agent systems’ where AI agents undertake large numbers of low value transactions (e.g. trading data from a sensor). We believe our new consensus mechanism makes a significant contribution to blockchain infrastructure, making DLT fit for purpose in high performance use cases.”

Fetch.AI also recently made its test wallet available via both the Android and Apple app store. With the test wallet users can begin to make economic value transfers with test FET tokens.

About Fetch.AI
Fetch.AI is based in Cambridge, UK with development talent across the globe. Fetch.AI enables the deployment of complex multi-agent systems (MAS) over a decentralised network and provides tools to enable the construction of intelligent agents. Fetch.AI delivers a unique, decentralised digital world that adapts in real-time to enable effective, friction-free value exchange. Powered by innovations such as the Smart Ledger, Fetch.AI has digital intelligence at its heart: delivering actionable predictions, instant trust information and enabling the construction of powerful collaborative models. With unrivalled performance and scalability, Fetch.AI is the missing critical infrastructure for tomorrow’s digital economy.

Fetch.AI’s Minimal Agency Consensus Scheme

An integral part of the Fetch.AI vision, that we have so far kept under the radar, is our technology for providing the world’s fastest, most secure and most decentralised ledger. Consensus is, of course, a vital part of any ledger design, and I am delighted to be able to announce the release of our minimal agency consensus protocol, which combined with our scalable ledger, enables Fetch.AI to overcome the blockchain trilemma.

We have already succeeded in developing a sharded blockchain that can process up to 30,000 transactions in each shard. By running many of these shards, we will be able to offer vast and scalable throughput that can provide the truly low-cost transaction and smart contract platform that will allow us to achieve our ambitions for autonomous agents.

The blockchain trilemma states that scalability can only be achieved by sacrificing either security or decentralisation. The idea behind our protocol is to reduce the “agency” or ability of block producers to control which transactions are entered into the blockchain. This ensures that the participants in the network are forced to behave in a very restricted way, and endows the ledger with similar cryptoeconomic security and performance characteristics to centralised systems. At the same time, the combination of this consensus scheme with a sharded blockchain enables everyone to participate in the network without needing to buy specialised equipment or to be very wealthy.

Consensus design is a technical subject, so we’ve used a Question and Answer format to explain the most important aspects of the consensus in a way that is intended to be accessible to non-experts. You can find more details in our yellow paper.

Can you briefly describe your consensus mechanism?

The protocol uses a Proof-of-Stake mechanism to construct a blockchain. The ledger nodes that add each block are elected using a decentralised random beacon (DRB). This is a cryptographic technique that enables the ledger nodes to collectively compute a random number in a way that cannot be controlled (or stopped) by any single participant in the network. The DRB is used to elect a committee that propose transaction sets that are stored on a Directed Acyclic Graph (DAG). A leader then closes the DAG and these transaction sets are converted to a block, which is added to the chain. The DAG is a transient (non-permanent) data structure and does not need to be kept indefinitely.

Figure: (a) A blockchain is a linked list of blocks. (b) In a Directed Acyclic Graph (DAG), blocks can connect to more than one previously added block. The two data structures have distinct properties that are useful for different purposes: blockchains have strict ordering, which is useful for providing rapid and secure finality, while a DAG can be modified asynchronously. A transient DAG is used in the Fetch.AI consensus and is later converted to a blockchain.
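The round structure described above (beacon elects a committee, committee members contribute transaction sets to a transient DAG, the leader deterministically closes the DAG into a block) can be sketched in a few lines. This is a highly simplified, assumption-laden toy, not the Fetch.AI implementation: the "beacon" is stood in for by a seeded RNG, and the DAG is just a list of transaction sets.

```python
# Toy sketch of one consensus round. All names are illustrative; the real
# DRB is a threshold-cryptographic protocol, not a seeded PRNG.
import hashlib
import random


def beacon(prev_block_hash: str, round_no: int) -> random.Random:
    # Stand-in for the decentralised random beacon: every honest node
    # derives the same randomness, and no single node controls it.
    seed = hashlib.sha256(f"{prev_block_hash}:{round_no}".encode()).hexdigest()
    return random.Random(seed)


def run_round(nodes, mempool, prev_hash, round_no, committee_size=3):
    rng = beacon(prev_hash, round_no)
    committee = rng.sample(nodes, committee_size)   # beacon elects the committee
    leader = committee[0]                           # securely elected leader
    # Committee members contribute transaction sets: the transient DAG.
    dag = [rng.sample(mempool, 2) for _ in committee]
    # The leader "closes" the DAG: a public, deterministic rule (here,
    # sorting by transaction hash) yields a strictly ordered block.
    txs = sorted({tx for txset in dag for tx in txset},
                 key=lambda tx: hashlib.sha256(tx.encode()).hexdigest())
    block_hash = hashlib.sha256((prev_hash + "".join(txs)).encode()).hexdigest()
    return leader, txs, block_hash
```

Because every step after the beacon output is deterministic, any node replaying the round from the same beacon value reconstructs the identical block, which is what makes the DAG safe to discard afterwards.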

A useful Proof-of-Work (uPoW) procedure is used to solve the problem of efficient transaction execution. We have extended this concept to a larger range of problems and now refer to it as “synergetic computing” to reflect the wider scope of the technology.

What is the philosophy behind the consensus and how does it improve on what already exists?

The fundamental design principle is minimal agency, which means that individual nodes have little direct control over the transactions that are entered into the global state. The way this is implemented is that the nodes submit candidate transactions but the choice over which of these is added to the global state is specified by a public, deterministic algorithm implemented in the Fetch virtual machine. This has many advantages over other protocols, particularly in resisting transaction censoring or denial-of-service attacks. This allows security to be achieved without compromising transaction throughput.
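One way to picture a "public, deterministic algorithm" choosing the order is a topological sort of the transaction dependencies with ties broken by transaction hash. This is an illustrative stand-in, not the actual algorithm in the Fetch virtual machine; it shows why no individual node retains discretion: every node running the same rule on the same DAG derives the same order.

```python
# Illustrative deterministic ordering: topological sort of a dependency DAG,
# with ties broken by SHA-256 hash so the result is the same on every node.
import hashlib
from graphlib import TopologicalSorter


def tx_hash(tx: str) -> str:
    return hashlib.sha256(tx.encode()).hexdigest()


def deterministic_order(deps: dict) -> list:
    # deps maps each transaction to the set of transactions it depends on.
    ts = TopologicalSorter(deps)
    ts.prepare()
    order = []
    while ts.is_active():
        # All transactions whose dependencies are satisfied become ready
        # together; the hash tie-break removes any remaining discretion.
        ready = sorted(ts.get_ready(), key=tx_hash)
        order.extend(ready)
        ts.done(*ready)
    return order


deps = {"c": {"a", "b"}, "b": {"a"}, "a": set()}
print(deterministic_order(deps))  # ['a', 'b', 'c']
```

Since the rule is public and pure, a node that submits a transaction cannot influence where it lands in the final order, which is the censorship-resistance property described above.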

How does someone participate in the consensus?

Users can participate in the consensus by purchasing a stake and turning on their computer. The ledger is designed to achieve high throughput by running a small number of shards on a desktop PC. The consensus is also compatible with a ledger running a larger number of shards for higher throughput. At higher throughputs, people will still be able to participate in the consensus by running a subset of the shards on their desktop.

Doesn’t Proof-of-Stake mean that the rich get richer? How do you prevent that from happening?

It’s important to understand what Proof-of-Work and Proof-of-Stake mean from a security perspective. In both cases, they represent a cost for participating in consensus. If you sum up the costs of all participants then you have a measure of the cost for an attacker to take over the protocol. The best outcome that any secure consensus design can hope to achieve is therefore to lower the barriers to entry and to allow as many individuals as possible to participate in the consensus. The advantage of Proof-of-Stake is that an attacker must purchase more tokens on the open market than the existing stakeholders hold, which will typically involve a much higher cost. The other important feature that is necessary to prevent the “rich from getting richer” is to ensure that the rate of return and risk (i.e. variance) in rewards are identical for both smaller and larger stakeholders. Our protocol design fulfils all of these requirements, which will prevent centralisation of tokens.
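The "identical rate of return" property can be checked with a small simulation. The numbers below are invented for illustration: when a stakeholder's chance of earning the block reward is exactly proportional to their stake, a participant with 100× the stake earns 100× the reward on average, so the return per token staked is the same for both.

```python
# Simulation of stake-proportional reward selection: return per unit of
# stake is (in expectation) identical for small and large stakeholders.
# Stake sizes and reward value are illustrative only.
import random

random.seed(0)  # fixed seed so the run is reproducible

stakes = {"small": 10, "large": 1000}
rewards = {name: 0.0 for name in stakes}
total_stake = sum(stakes.values())
REWARD, ROUNDS = 1.0, 100_000

for _ in range(ROUNDS):
    # Selection probability proportional to stake.
    winner = random.choices(list(stakes), weights=stakes.values())[0]
    rewards[winner] += REWARD

for name, stake in stakes.items():
    rate = rewards[name] / (stake * ROUNDS)  # return per token per round
    print(f"{name}: {rate:.6f}")
# Both rates converge towards REWARD / total_stake ≈ 0.00099
```

The remaining difference between the two participants is variance, which shrinks as more rounds are played; protocol features such as collaborative block production reduce that variance further for small stakeholders.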

How do you deal with a nothing-at-stake attack?

The blockchain incorporates a deterministic notion of finality, which means that for nodes following the protocol, the chain cannot be modified beyond a certain block depth. The blocks are also created collaboratively by a randomly-selected committee of nodes using a DAG. Attempts to add multiple vertices to the DAG can be detected easily and resolved by fining the perpetrators (this is known in the field as stake slashing). The design incorporates a novel algorithm that makes generation of forks in the DAG by attackers easily detectable by honest majorities.

How does your protocol solve the scalability trilemma?

The scalability trilemma states that it is difficult to simultaneously achieve the properties of scalability, security and decentralisation in any ledger technology. We believe that our consensus and ledger design represents a viable solution to the scalability trilemma.

● Scalability is provided by the sharded ledger, which delivers scalable transaction throughput.

● Decentralisation is ensured by node operation being open to everyone and not requiring any specialised equipment.

● Security is provided by the minimal agency design, which restricts the potential for attack and enables non-zero transaction fees to be levied on users without artificially constraining the system’s throughput.

Why do you use a decentralised random beacon (DRB)?

We use Proof-of-Stake and a decentralised random beacon for allocating responsibilities to different nodes at different times. This is an energy-efficient, secure and scalable alternative to Proof-of-Work that we believe is likely to become an industry standard in the coming years.
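To convey the idea of randomness that no single participant controls, here is a toy beacon in the commit-reveal style. This is not the Fetch.AI DRB, which uses threshold cryptography and, unlike this sketch, cannot be stalled by a participant withholding their reveal; all names and secrets below are invented.

```python
# Toy commit-reveal randomness (NOT the actual DRB): each node commits to a
# secret, then reveals it; the XOR of all secrets is unpredictable unless a
# node could see the others' secrets before committing its own.
import hashlib
from functools import reduce


def commit(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()


def beacon_output(reveals, commitments) -> bytes:
    # Every reveal must match the commitment published earlier.
    assert all(commit(s) == c for s, c in zip(reveals, commitments))
    # XOR the secrets together; any single honest contributor makes the
    # combined output uniformly random.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), reveals)


secrets = [b"node-one-secret!", b"node-two-secret!", b"node-thr-secret!"]
commitments = [commit(s) for s in secrets]
rand = beacon_output(secrets, commitments)
print(len(rand))  # 16 bytes of shared randomness
```

Because XOR is order-independent, every node combining the same reveals derives the same value, which is what lets the beacon allocate committee roles consistently across the network.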

How does your protocol improve over others that use a DRB?

The main difference is in design philosophy. Most of the other projects use classical theories of distributed computing and cryptography. These are based on the concept of an adversary who is endowed with a specific set of capabilities. The goal is then to design the system to be resistant to the specified attacker. In our case, we are still using the classical adversarial models but we are also treating the blockchain as a multi-agent system where all of the agents (in this case processing nodes) are self-interested, and security is additionally provided by making any deviation from the protocol costly for the participants. This is similar to how the existing economy works; most people trust Visa with their transactions or Google with their data because they know that the financial cost of abusing that trust makes it unprofitable for those companies. The profit incentive combined with their reputation for reliability gives consumers confidence that these firms can be trusted. The minimal agency principle also provides protection against many other types of misconduct, such as bribery or fraud, by limiting what any individual node can achieve.

Your protocol incorporates a DAG. What are the advantages of your consensus scheme compared with other DAG-based ledgers?

A disadvantage of DAG-based protocols is that they might sometimes lead to transactions taking an unreasonably long time to be accepted into the ledger; an issue known technically as a failure of liveness. One way of resolving this problem is to use a coordinator to snapshot the DAG at different time intervals. Unfortunately, protocols that use this approach are effectively under the control of the coordinator and therefore centralised. DAG-based platforms are currently unsuitable for smart contracts as they do not provide strict ordering of transactions. They also contain redundant information that needs to be stored indefinitely. We believe that our protocol delivers better liveness, scalability, security and smart contract support than any existing ledger that is based solely on DAGs.

How does your protocol remove the need for a central coordinator?

The decentralised random beacon elects a committee of leaders who fulfil the same role as the central coordinator. The random nature of the leader election, the cycling of the leadership role and the minimal agency design provide greater security than a centralised alternative.

How is your protocol different or better than existing scalable blockchains?

An important difference from most of the existing scalable blockchains is that we do not impose any restriction on the number of nodes that can participate in the consensus. Several prominent cryptocurrencies deliberately limit the number of node operators to enable higher throughput, but this means that their ledgers are more easily controlled by centralised cartels. In many cases, the situation is made even worse by the node operators being able to extract very high rents through inflationary block rewards. This will cause even greater centralisation of wealth in the future, very much to the detriment of other users. The Fetch.AI token is non-inflationary, which protects the value of users’ holdings.

What is the difference between Fetch.AI and other AI/ML blockchain projects?

Fetch.AI is a state-of-the-art cryptocurrency project whose goal is to build the fastest, most scalable and secure ledger in the world. This differs from the other projects whose focus is more restricted to AI applications. Another important difference is that Fetch.AI is building a platform for multi-agent systems, which we believe is the most appropriate scientific formalism for building blockchain applications. We’re planning on using machine learning in the Open Economic Framework (OEF) for matching trading partners and for intelligent search and discovery, but this is with the aim of supporting the multi-agent system development.

What are your future plans?

We’ll be releasing more details in the coming months, including academic papers, details of patent applications, articles for the general public and benchmarking of the consensus’ performance on the test network.

It sounds like you are making good progress on the ledger itself. How does this affect the economic platform for autonomous agents?

There is a synergy between these two elements. The Open Economic Framework (OEF) depends on the blockchain to act as the medium for financial exchange, while the OEF provides cryptoeconomic security to the blockchain. It’s going to be exciting to further develop the connections between these fascinating technologies.

Thanks to David Galindo, Troels Rønnow, Marcin Abram, Jin-mann Wong, Daniel Honerkamp and other colleagues at Fetch.AI for their help in preparing the consensus document.