Development Update

Dear Community,

We had hoped last week, at the conclusion of August, to share with you news of our latest development progress. However, because that date coincided with Craig’s completion of some crucial work, we decided to wait a short while longer so that we could announce a significant achievement. Thanks to Craig’s efforts coding our new Shift Cluster technology, we can now say with confidence that we remain on schedule to meet our Q3 release target. To refresh your memory, and for those of you who are new to the Shift community: before the beginning of Q4 2018, we will release our own, custom-built Shift IPFS Cluster Pin Manager onto the Shift testnet. Because our testnet currency is non-monetized, the world will be able to enjoy a free-to-join (for the time being) digital storage medium and hosting service that is blockchain-secured by a function-rich Shift core. Furthermore, as a special surprise for our devoted community, a new version of our Shift wallet will make the interface of this blockchain-integrated storage solution even more accessible and usable.

In order to give you, our community, news of recent developments as they occur, we’ve decided to divide our summary into two distinct newsletters: one covering the progress made on the Shift IPFS Cluster Pin Manager, and another exploring our advancements in the sphere of blockchain integration. This first newsletter is therefore devoted to progress made on the Shift IPFS Cluster Pin Manager; we will publish news of the blockchain integration progress in a week’s time.

Shift IPFS Cluster Pin Manager

As mentioned in the development update of August 14th, the Shift IPFS Cluster Pin Manager is composed of two key components. The first is our own peer-to-peer library, which forms the Shift Cluster core; it is an alternative to the libp2p and devp2p libraries that form the foundation of IPFS/IPFS-Cluster* and Swarm**. The second is our own cluster protocol, built on top of the Shift Cluster core. The cluster protocol integrates IPFS with the core and, as a result, allows peers to join and connect to the IPFS swarm, our private IPFS network. It is the cluster protocol that is responsible for the pinning and unpinning of content on IPFS.

In the last development update covering the Shift Cluster’s technology, we reported that the peer-to-peer library was nearing completion. We can now officially state that this task has concluded: the Shift Cluster core is essentially complete, to the point where we are able to integrate it with our private IPFS network. This does not mean it has been perfected (deployment on the testnet is the next step toward that state), but we are satisfied that it is now fit to be publicly demoed and tested.

To provide clarity on what exactly has been accomplished, let’s review the developments made on two subtasks of the peer-to-peer library from the August 14th newsletter, which have now been removed from the ‘To-do List’.

  • Support using a cluster secret to limit who can join. This task has been completed, and the process works as follows: the cluster secret is a parameter pre-included in the node install script, activated when we created our cluster instance. When a node wants to join the cluster, it submits a bootstrap call (join request) to any node that is already part of it. The receiving node then verifies that its own cluster secret matches the one presented by the joining node. This mechanism prevents nodes that have not installed the Shift Cluster software from joining the cluster by faking the bootstrap call. The first peer, deployed by us, started the cluster without bootstrapping to any node, so the secret in its configuration file is the one that all other nodes must possess in order to make successful join requests.
  • Make sure peers are removed automatically when they are offline. To expedite our move to the testnet stage, and because automatic removal is not critical for a successfully operating system, we have decided that our objective of launching by the end of this month is best served by resuming development on this functionality at a later date. In IPFS-Cluster, the bootstrapping of new peers fails if any peers are offline; in Shift Cluster this problem does not arise, because we have designed our system so that online peers simply ignore those that have gone offline. We will definitely add automatic removal in the future, as it prevents the peer list from becoming crowded with inactive nodes, but for now we believe its absence will not cause any issues.
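
The cluster-secret check described above can be sketched as follows. This is an illustrative model only: the class and method names (`ClusterNode`, `handle_bootstrap`) are our assumptions, not Shift’s actual code, and a real implementation would carry the secret over an authenticated network channel rather than a direct function call.

```python
import hashlib
import hmac

class ClusterNode:
    """Toy model of a cluster peer that gatekeeps join requests by secret."""

    def __init__(self, cluster_secret: str):
        self.cluster_secret = cluster_secret
        self.peers = []  # known cluster peers (addresses)

    def handle_bootstrap(self, requester_addr: str, requester_secret: str):
        """Accept a join request only if the requester's secret matches ours."""
        # Compare digests in constant time so the check doesn't leak the secret.
        ours = hashlib.sha256(self.cluster_secret.encode()).digest()
        theirs = hashlib.sha256(requester_secret.encode()).digest()
        if not hmac.compare_digest(ours, theirs):
            return None  # reject: node was not installed with the right secret
        self.peers.append(requester_addr)
        return list(self.peers)  # a successful join returns the peer list

# The first peer (deployed by Shift) bootstraps to no one; the secret in its
# configuration becomes the one every later joiner must present.
genesis = ClusterNode(cluster_secret="example-secret")
peer_list = genesis.handle_bootstrap("10.0.0.2:4001", "example-secret")
```

A node that fakes the bootstrap call without the correct secret simply receives no peer list back, which is the rejection behaviour the update describes.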

With work on these two tasks now no longer on the agenda, those remaining from the ‘To-do List’ published within the last development update relate to coding left to be carried out on the cluster protocol. These tasks include:

  • Broadcast/bootstrap new peers to the IPFS swarm.
  • Issue pin and unpin commands to specific peers in the IPFS swarm.
  • Monitor data from IPFS swarm.

Now, with the cluster core essentially complete, Craig was able to start on the cluster protocol last week. The cluster protocol functions principally as a wrapper*** built on top of the IPFS API for issuing pin and unpin commands to the IPFS swarm peers. However, as a convenience, we have built it in such a way that it also handles new cluster peers joining the IPFS swarm.
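
To make the wrapper idea concrete, here is a minimal sketch of how pin and unpin commands could be translated into calls against a peer’s local IPFS HTTP API (go-ipfs exposes `/api/v0/pin/add` and `/api/v0/pin/rm` on port 5001 by default). The helper names are illustrative assumptions, not Shift’s actual cluster-protocol code.

```python
import urllib.parse

# Default address of a local IPFS daemon's HTTP API.
IPFS_API = "http://127.0.0.1:5001/api/v0"

def pin_url(cid: str) -> str:
    """Build the URL for pinning a content identifier on the local daemon."""
    return f"{IPFS_API}/pin/add?{urllib.parse.urlencode({'arg': cid})}"

def unpin_url(cid: str) -> str:
    """Build the URL for unpinning a content identifier."""
    return f"{IPFS_API}/pin/rm?{urllib.parse.urlencode({'arg': cid})}"

# A cluster peer receiving a pin command would POST to pin_url(cid), e.g.
# urllib.request.urlopen(pin_url(cid), data=b"") once an IPFS daemon is running.
```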

For those of you who are unfamiliar with our usage of the terms ‘cluster’ and ‘swarm’, or with the networks themselves, we would like to provide the following brief explanation before discussing this first subtask:

There are two different networks that comprise our system for managing the storage of data. First, there is the Shift Cluster, a network of peers that communicate with each other to handle the bootstrapping of new peers to the network. This ongoing process is regulated by the cluster core’s software, comprising the peer-to-peer library. The second network is what we call the IPFS swarm, our own private IPFS storage network. The IPFS swarm is the actual network of digital media storage devices on which content is pinned. Usually, peers that are part of the cluster are also part of the IPFS swarm. You may ask yourself: why, then, is there a need for two different networks if they both consist of the same peers?

The answer to this question relates to our mission to create the private IPFS network that will best serve our clients. Coexisting within a public, global network would mean accepting a heightened risk that content could be wiped by the garbage collectors of a provider with no incentive to maintain it; these garbage collectors automatically clear storage should free space fall below a certain threshold. Our clients will not have to be concerned about such a possibility, as the cluster functions as a “gatekeeper” by ensuring that those who are part of the swarm conform to the prerequisites we have set.

With Shift’s private network, peers need to be bootstrapped, and that is where the cluster’s gatekeeper function comes in. So, if you, the user, wish to join our storage network and allocate drive space for use on Shift, you must first join our cluster (the first network) before going on to join our swarm (the second network), which is the actual network that uses the InterPlanetary File System to store data.

By automating the connection process so that a user who starts the Shift IPFS Cluster Pin Manager software first joins the cluster and then joins the swarm, we believe we have designed a convenient process, made yet more efficient through its inclusion in the cluster protocol. This was the first functionality of the cluster protocol that Craig worked on, and we are happy to report that, within an incredibly short space of time, the setup process for the cluster and IPFS swarm is now working and has been successfully tested in our private testing environment by running it on different peers in different locations.

To fully illustrate this procedure, we can outline the process by which a peer is automatically added to the swarm when joining the cluster as follows:

  1. The IPFS swarm peers all have an IPFS peer ID, which is used to connect to the IPFS swarm. The cluster peers themselves have no knowledge of the IPFS peer IDs, at least not initially. But with the IPFS daemon running on the same machine (localhost), a peer can use an API/RPC call to its local daemon to determine its IPFS peer ID.
  2. Each cluster peer keeps a list of other cluster peers.
  3. If Peer A is a part of the cluster, and Peer B wishes to join, then Peer B first bootstraps to Peer A (which can be any node).
  4. A successful bootstrap call returns a list to Peer B containing all other cluster peers.
  5. With that list Peer B picks a random peer (let’s say, for the sake of convenience, Peer A) and asks Peer A directly for Peer A’s IPFS peer ID.
  6. Peer A on the cluster then asks the local IPFS daemon (running on the same host) what its own IPFS peer ID is, retrieves it and then sends it back to Peer B.
  7. Peer B makes a different call to the local IPFS daemon and uses Peer A’s IPFS ID to connect to the IPFS swarm that Peer A is already connected to.
  8. Peer B is now connected to the IPFS swarm too and gets its own IPFS peer ID.
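
The steps above can be sketched with in-memory stand-ins for the cluster peers and their local IPFS daemons. In reality a peer would resolve its IPFS peer ID via the daemon’s `/api/v0/id` endpoint and connect with `/api/v0/swarm/connect`; the class and method names here are illustrative assumptions, not Shift’s actual code.

```python
import random

class LocalIpfsDaemon:
    """Stand-in for an IPFS daemon running on the same host as a cluster peer."""

    def __init__(self, peer_id: str):
        self.peer_id = peer_id
        self.connections = set()

    def id(self) -> str:
        # Stands in for GET /api/v0/id, which returns the daemon's peer ID.
        return self.peer_id

    def swarm_connect(self, other_id: str):
        # Stands in for /api/v0/swarm/connect with the other peer's address.
        self.connections.add(other_id)

class ClusterPeer:
    def __init__(self, name: str, daemon: LocalIpfsDaemon):
        self.name = name
        self.daemon = daemon
        self.cluster_peers = []  # step 2: each peer keeps a list of peers

    def bootstrap(self, existing_peer: "ClusterPeer"):
        # Steps 3-4: join via any existing node and receive the full peer list.
        existing_peer.cluster_peers.append(self)
        self.cluster_peers = existing_peer.cluster_peers[:]
        # Step 5: pick a random peer and ask it for its IPFS peer ID.
        target = random.choice([p for p in self.cluster_peers if p is not self])
        # Step 6: the target asks its own local daemon for its IPFS peer ID.
        target_ipfs_id = target.daemon.id()
        # Steps 7-8: connect our own daemon to that IPFS peer.
        self.daemon.swarm_connect(target_ipfs_id)

peer_a = ClusterPeer("A", LocalIpfsDaemon("QmPeerA"))
peer_a.cluster_peers.append(peer_a)
peer_b = ClusterPeer("B", LocalIpfsDaemon("QmPeerB"))
peer_b.bootstrap(peer_a)  # Peer B joins the cluster, then the IPFS swarm
```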

The work Craig has completed in this field is a huge achievement because, when we began, it was unknown whether the theoretical solution we devised would work in practice. Some work remains, such as completing the system for issuing pin and unpin requests and integrating the IPFS daemon calls required to receive statistics for the cluster, but implementing these in the cluster protocol will be relatively straightforward. In short, thanks to Craig’s effort and ability, the Shift IPFS Cluster Pin Manager will, in all likelihood, be released by the end of this month!


The Shift Team

* For further details, see
** For further details, see
*** For further details, see

