Fetch.ai Community AMA Recap (25th February 2021)
Feb 25, 2021
Fetch.ai Community AMA Recap with
- Jonathan Ward
- Edward FitzGerald
TIME: 2pm GMT, Thursday 25th February
Q: Is it required for someone starting out to buy a ton of FET tokens to delegate to themselves first? In other words, does having more FET as a validator in the beginning automatically give you an advantage over others?
Ans: Ultimately you are correct: the more tokens you have associated with your validator, the higher the consensus power you will have. This does not mean that you need to have purchased all the tokens yourself. You can get other community members to join your validator's pool of tokens. (Edward)
Q: How many validators are going to be selected? To be profitable at 1% of total stake, that means more than 100 validators would be unprofitable. (assuming equal distribution)
Ans: The Fetch foundation is planning on delegating stake to a subset of validators to operate the network. This depends a bit on the amount of tokens that are transferred from Ethereum to mainnet, and also how much demand there is for validator slots.
That’s a good point. Your returns as a validator depend on your share of the total amount staked and the commission you charge to others for delegating stake. You would need to charge a commission of around 8% and hold 1% of the stake to break-even (at current prices). We could have 100 validators but these would need to all hold exactly the same amount so it seems likely we’ll have a smaller number to begin with. (Jonathan)
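To make that arithmetic concrete, here is a rough sketch of the break-even calculation. Every number below (total stake, reward rate, token price, operating cost) is an assumption chosen purely for illustration, not an official Fetch.ai figure; the simplified model is that a validator's income is the commission it takes on the staking rewards earned by its delegation pool.

```python
# Rough break-even sketch for a validator. All parameters are assumptions
# chosen for illustration only; none are official Fetch.ai figures.

def break_even_commission(pool_share, total_staked_fet, reward_rate,
                          fet_price_usd, annual_cost_usd):
    """Commission rate at which commission income covers operating costs.

    Simplified model: income = commission taken on the staking rewards
    earned by the validator's delegation pool.
    """
    pool_rewards_usd = pool_share * total_staked_fet * reward_rate * fet_price_usd
    return annual_cost_usd / pool_rewards_usd

# Assumed parameters: 100M FET staked in total, 10%/year rewards,
# $0.50 per FET, $4,000/year in hosting/ops costs, 1% pool share.
commission = break_even_commission(
    pool_share=0.01,
    total_staked_fet=100_000_000,
    reward_rate=0.10,
    fet_price_usd=0.50,
    annual_cost_usd=4_000,
)
print(f"break-even commission: {commission:.0%}")  # 8% under these assumptions
```

Under these (invented) inputs the break-even commission comes out at 8%, matching the ballpark in the answer above; change any assumption and the figure moves accordingly.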
Q: Hi team, How will you guys select the validators for mainnet 2.0 genesis?
Ans: So there are a couple of factors here. The short answer is that for genesis we are actually going to reach out to a subset of external validators to be part of genesis.
One aspect of this project which is a little different to other projects is that we have an established ERC20 token pool. Practically this means that we envision that the bulk of validators will need to migrate their tokens across the ETH to FET token bridge before becoming a validator. (Edward)
Q: How much demand has there been so far of subscribed participants?
Ans: We have actually been really impressed with the interest in the validator program. We are looking forward to seeing how many validators make it through the onboarding stages. (Edward)
Q: Will the stake on eth stop and move entirely to mainnet 2.0?
Ans: So you are right, we will move it to the mainnet. Practically this means that we will continue to operate it for a short while after the launch of the main network, and then we will run a campaign to migrate all users to the native staking system on mainnet. (Edward)
Q: What does a subset of external validators mean exactly? Will it be the big established staking companies? meaning smaller validators will hardly have a chance?
Ans: We’re less interested in established staking companies, as they generally only really care about the bottom line. We’re planning on mostly supporting validators that are passionate about the technology and want to help build the ecosystem. (Jonathan)
Q: How many validators have signed up?
Ans: We’ve had more than 200 expressions of interest. (Jonathan)
Q: How many validators will be selected?
Ans: We’re still working on that — it depends a bit on how things go with the test nets we have planned. (Jonathan)
Q: I’m a little confused about the difference between the current FET token and a different ERC-20 token … ETH to FET token bridge mentioned above. Would you please expand on that? Is the execution network moving off Ethereum? (Pardon me if this has been written about, I just haven’t seen that)
Ans: Yes, I can elaborate a bit further. Since the FET token is used in a number of smart contracts and applications on the Ethereum network, for the foreseeable future we will have FET tokens on both the native ledger and the ETH network.
To ensure users can move funds between both networks, the Fetch foundation will run a token bridge service.
Practically this involves a pair of smart contracts, one on each network, where funds are locked on one side and then released on the other. This is critical to ensure that the token supply is maintained across both networks. At the launch of the main network we will support ETH <-> Native, but in time we are likely to extend this bridge to Binance Smart Chain too. (Edward)
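The lock-and-release pattern described above can be sketched in a few lines. This is a conceptual toy only: the class names are invented, and the real bridge is implemented as audited smart contracts plus relayer infrastructure run by the foundation.

```python
# Toy illustration of the lock-and-release pattern behind a token bridge.
# Class names are invented for this sketch; the real bridge is a pair of
# smart contracts (one per network) plus an off-chain relayer service.

class BridgeSide:
    def __init__(self, name, locked):
        self.name = name
        self.locked = locked  # tokens held by this side's bridge contract

class Bridge:
    def __init__(self, eth_locked, native_locked):
        self.eth = BridgeSide("ethereum", eth_locked)
        self.native = BridgeSide("native", native_locked)

    def transfer(self, src, dst, amount):
        # Tokens are first locked on the source chain...
        src.locked += amount
        # ...and only then released on the destination chain, so the
        # circulating supply summed across both networks never changes.
        if dst.locked < amount:
            raise RuntimeError("insufficient bridge liquidity on destination")
        dst.locked -= amount
        return amount

bridge = Bridge(eth_locked=0, native_locked=1_000_000)
bridge.transfer(bridge.eth, bridge.native, 250)
```

The key invariant is the supply check in the comments: every token released on one side corresponds to a token locked on the other, which is what keeps the total supply consistent across both networks.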
Q: Will these specs be enough on a dedicated VPS? CPU: Intel Core i7-6700 (quad-core Skylake, Hyper-Threading, Intel VT virtualization), RAM: 64GB DDR4, NVMe SSD 2x512GB
Ans: For a single node that is more than enough. Remember that if you are running a validator which is open to the internet, you are opening yourself up to potential DDoS attacks. We recommend running a sentry-and-validator architecture to provide some level of protection against this.
This machine spec is good enough to run both a validator and a sentry with something like Docker for network isolation. (Edward)
Q: Given the level of interest (and difference of experience in running a validator) — is there a plan to deploy to testnet for ALL interested so the community can be engaged in the future of the project even if they are not ‘production’ grade validators?
Ans: Yes, starting from next week we will be reaching out to the whole validator community to onboard on the Beacon World testnet. We want to make sure potential validators are happy with the basic procedure of registering their validator node, delegating stake to it and then removing that stake.
In order to lay the groundwork for this we will be restarting the Beacon World network on Friday evening / Monday morning (because we need to adjust genesis parameters). (Edward)
Q: Would there be an interest for fetch.ai to have a service providing validator hosting as a service?
Ans: You mean us operating validators? Or delegating staking to a professional validator? Either way the answer is yes.
The validator-as-a-service is something that we might use for some of our industrial partners to give them access to end-points, etc. I would also generally recommend delegating for non-technical users. (Jonathan)
Ans: At least in the short term there is no plan to provide a sort of validator as a service platform. In general, that might lead to more centralisation.
The Fetch.ai foundation will of course run some of our own validators and community members are welcome to delegate stake to them. (Edward)
Q: Isn’t it advantageous to choose established staking providers, as they can guarantee a secure and stable operation of a node?
Ans: It’s true that there are some very skilled validators out there. We’re really keen to reward decentralization and individuals who can help us push forward the ecosystem. I expect that we will have some established validators as well. Ultimately, it’s a market and the choice is down to the FET token holders. (Edward)
Q: What is the anticipated amount of total stake? How many tokens will constitute 1% in your estimation?
Ans: That’s a good question, and it’s difficult to answer, because it depends on how many FET tokens migrate across the bridge and then what fraction is staked. I would anticipate it being small to begin with and growing over time.
Getting in early would be a good way for someone starting out in running validators to get established in the community. (Jonathan)
Q: Are there any parameters or gauges to ensure decentralization? Not of validator owners but the infra backbone they all sit on. i.e. if ALL sit on Cloud provider x it is not very resilient. What will be the interface users have to see validator metrics and sign up to specific ones? Will there be any external performance monitoring of the validators on the network? How will validator performance and uptime be observed?
Ans: It is a really good question. In practice we hope that attracting a wide selection of validators will, by its very nature, lead to a variety of underlying hosting providers.
It is, however, difficult to measure directly. I think the closest thing we could do in the short term is probably a (voluntary) validator survey to probe that question a little more deeply. (Edward)
Q: What will be the interface users have to see validator metrics and sign up to specific ones?
Ans: So the Fetch.ai block explorer monitors some of the uptime metrics at a coarse level. It is also the main interface for reviewing the list of active validators when making staking decisions. (Edward)
Q: I have two questions, how will you as Fetch and the community in general be able to monitor the “quality” or lack thereof of a validator, what metrics are you planning to use? Will there be a central “validator database” somewhere online for all to check and delegators to choose their validator from? And apart from popularity, stake size, how else will the validators be ranked?
Ans: Q1: The primary metric we use to monitor validators is whether they produce blocks on time. That is built into the protocol itself and monitored over a 250-block (~20 minute) interval. If a validator fails to generate blocks consistently, it will be “jailed” (removed as an active validator) and part of its stake will be slashed. This provides a strong incentive for validator operators to make sure their node remains online and in an active state.
Q2. We envision the block explorer being the central hub people will use to look at the current set of active and inactive validators, commission rates etc. (Edward)
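The liveness check described above can be sketched as a sliding window over recent blocks. The 250-block window comes from the answer; the 50% signing threshold is an assumed parameter, since the real value is a protocol setting controlled by governance.

```python
# Sketch of the liveness/jailing rule: over a sliding window of 250 blocks
# (~20 minutes), a validator that signs too few blocks gets jailed.
# WINDOW comes from the answer above; MIN_SIGNED_FRACTION is an assumption,
# as the real threshold is a governance-controlled protocol parameter.

from collections import deque

WINDOW = 250                # blocks, per the answer above
MIN_SIGNED_FRACTION = 0.5   # assumed threshold for illustration

class LivenessTracker:
    def __init__(self):
        self.recent = deque(maxlen=WINDOW)  # True = signed, False = missed
        self.jailed = False

    def record_block(self, signed):
        self.recent.append(signed)
        # Only evaluate once a full window of history has accumulated.
        if len(self.recent) == WINDOW:
            signed_fraction = sum(self.recent) / WINDOW
            if signed_fraction < MIN_SIGNED_FRACTION:
                # Removed from the active set; part of the stake may be slashed.
                self.jailed = True
        return self.jailed

offline = LivenessTracker()
for _ in range(WINDOW):
    offline.record_block(False)  # validator misses a whole window -> jailed
```

Loosening or tightening `MIN_SIGNED_FRACTION` is exactly the kind of knob the answer below describes being adjusted through governance as the validator community matures.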
Q: Currently there are some limitations in ERC20 (e.g., sending FET directly to the FET contract address; I know, I’ve been bitten by that myself). When we move to mainnet, how can we fool-proof this mechanism as validators, so that when, for example, a delegator accidentally sends their FET to the wrong address, we (as validators, assuming we become such) don’t get into any legal disputes with the delegators? In other words, is there a legal status under which validators are protected?
Ans: That’s a good question. We’re in the fully unpermissioned space so the validators do not have to disclose their identity and this makes it practically impossible for someone to take legal action against a specific validator. (Jonathan)
Q: How are validators selected? i.e., we are talking about a max of 100 validators — if we get to a theoretical point in the future where 100 exist — how do the 101 candidates try to get in the top 100? or is it the case that anyone CAN be a validator potentially without endorsement? (they just need to have the top 100 amount of FET)
Ans: We’ll be restricting numbers to begin with, but these are not set in stone, and our consensus design should be able to support 1,000s of light nodes when it is implemented later in the year. https://www.fetch.ai/uploads/Fetch.AI-A-Minimum-Agency-Consensus-Paper.pdf (Jonathan)
Q: “If the validator in question fails to generate blocks consistently then they will become “jailed” (removed as an active validator) and they will have part of their stake slashed.”
What do you envisage as the time window for this? Does failing to produce consistently in a single 20-minute period constitute a penalty, or something else? What would be considered behaviour worthy of a penalty in practical terms? Who defines this penalty, or is there a specific framework and set of metrics that define it?
Ans: We have control (as a community) over the exact threshold. We are planning on being fairly relaxed with this threshold at the launch of the main net and we will use the governance tools to increase it over time as we build an established validator community. (we don’t want to cause slashes for honest mistakes especially at the beginning of the network). (Edward)
Ans: This also makes sense from a user perspective, as a lack of availability does impact the throughput and finality of the network, but this will only start to be really noticeable when we have significant numbers of transactions being processed. (Jonathan)
Q: If you would have to put priorities on the most important factors on HW requirements, how would you rank them in terms of importance: a) fast network, b) failover network connection, c) fast CPU, d) lots of cores, e) lots of ram, f) fast disks
Ans: Obviously we expect the usage profile to change over time, and this will be especially true over the course of the first year, I am sure. Given that, this is my list of priorities for a validator:
– Stable networking (speed mostly matters when syncing); for a validator, aliveness is basically the #1 priority.
– RAM is the next priority: at least 4GB per node.
– Disk performance is next, although this will mostly become an issue the longer the network has been live.
– CPU: this is the requirement we expect to build slowly over time as the transaction rate increases; for launch, it is the lowest priority. (Edward)
Q: I have seen reference to redundancy for supporting aspects, but the node itself has a private key and ID. Maybe I am missing something, but this in itself cannot be made fully HA? (I know the container or VM can be moved around hosts.) In the case of OS corruption, is a backup/restore the best/only option?
Ans: Typically (if you were going to do this seriously), you’d have redundant HSMs for signing. This is a great resource: https://kb.certus.one/ (Jonathan)