“Luca, I am considering working in crypto; can I pick your brain?”

You work for a large tech or finance organization; perhaps you were an early employee at a promising startup that became too big for you or never really took off. You love to keep up with technology and maybe have bought some bitcoin so as not to miss out; or perhaps you allocated some capital to it and had to defend your investment to people who missed the boat and convinced themselves that crypto is too risky, not backed by fundamentals, or that it's too late. You are surrounded by news and blog posts about crypto, and the mental energy you dedicate to thinking about what's next for you keeps pushing you towards crypto.

There’s very little doubt that the crypto revolution is amongst the most exciting forms of innovation our generation will witness, and we can now agree that it’s moving much faster than people predicted just two years ago (perhaps the absence of crypto conferences in ’20 and ’21 shifted the focus to #buidling).

Many people reached out to me asking to pick my brain to evaluate whether jumping to crypto is the right move for them. Even though I expect to refresh the argument often in the months ahead, I want to share an exercise that will help you answer the question “Shall I work in crypto?”. It boils down to five questions:

What user do you want to serve?

While the vision is what fuels the energy to move forward, the ability to execute generally requires different levers. From the cloud to the financial system, crypto companies have been reinventing the primitives and the infrastructure of large traditional industries; pushed by the need to validate their products, they targeted early adopters and often motivated them with financial incentives to offset the risks and the friction.

As of now, crypto adopters can probably be described with these categories:

  • Active and professional crypto traders → who want to maximize their returns by performing sophisticated trades
  • Passive and unsophisticated crypto traders → who want to put their crypto to work without moving their money around
  • Fintech users → who want to buy and sell bitcoin and ethereum inside the apps they use
  • Blockchain Layer 1 companies → who want to attract usage, create more wallets, outsource some platform work to fill some gaps in their infrastructure, etc.
  • Blockchain Protocols and Dapps → who want to build distribution, attract users, and validate their incentive model
  • Blockchain infrastructure companies → who want to sell standardized products to solve clients’ recurring needs
  • Financial service companies → who want to increase their performance by upgrading their infrastructure
  • Enterprises → who want to solve problems specific to the vertical they operate in
  • Enterprises’ innovation centers → who want to explore new use cases and increase the perception of the company by running internal research projects

Each segment comes with a different set of challenges and shapes the activities you will work on day to day; you want to make sure these are activities you are not only comfortable with but also positively challenged by (some would say ‘passionate about’).

What challenges do you want to solve on any given day?

Blockchain is fascinating because it requires an understanding of technology, markets, incentives, game theory, legal frameworks, and product-market fit at the same time. You will have to have a point of view, understand and anticipate trends, and solve problems that no one has ever solved before. You will have to be comfortable making decisions without history or data, and you will have to justify your ideas to stakeholders who may have different views and readings of the same situation. You will have to keep your team focused while the industry evolves, yet have the courage to kill what won’t work based on new information. And you need to be ready to sustain an unexpected crypto winter.

You will have to solve very different challenges: one day you discuss which development framework to support, then jump on a call with the legal team to understand European data regulation, then go back to unblock an engineer stuck on the data format of a transaction, and so on. Working in blockchain means being exposed to a variety of aspects that define what you learn and how you grow. My advice is to spend a lot of time thinking about this question.

What’s your contribution to the industry?

Blockchain evolves every day because the people in it push it forward; something is impossible until someone does it and it becomes the new norm. But the industry is still very small relative to other sectors, and operators know each other and talk to each other to figure out what’s next, which projects to support, and what the next natural step is for a certain topic. You become the point of reference for something inside and outside your company, and you need to know 1) what you want to spend your energy on, 2) what you want to be known for, and 3) what you can do better than your peers.

What’s your timing?

The Blockchain narrative comes from Venture Capitalists, Enterprises, and the Media.

Venture Capitalists have a 10-year vision, have the ability to wait a bit longer if they are wrong on timing, and are in a position to place different bets and profit only from the ones that work. As an operator, you don’t have this luxury: you have one bet which needs to work out in a reasonable timeline and your chances to place multiple/competing bets are really limited.

Enterprises make money with their core businesses and can afford to dedicate 1% to big innovations, especially when the market rewards exposure to it. If you use “enterprise adoption” as a metric, you’d better understand which enterprise, which adoption, from whom, why, at which cost, and how close to production the proof of concept is.

It feels superfluous to talk about the Media.

In short, it’s very hard to use someone else’s narrative to justify your own decisions. In the end, what you learned, the people you worked with, and the things you did day after day are the real value of your experience, which brings you to the previous questions.

What metrics do you want to own?

This is my favorite question. I purposely left it for the end because you will use it to validate the conclusion you reached by answering the previous questions. If you like the metrics you will own, bingo! If you don’t, you may want to reconsider your reasoning or even your intention to work in crypto. It goes without saying…you need to like the metric per se AND the realistic results you can obtain in the time frame you established for yourself.

People get into crypto from very different angles and for many different reasons, but your personal metrics for success matter; the alignment between reality and your expectations will determine your own success. In an industry that evolves every day, in a world dominated by so much noise and so many opportunities, approaching crypto without knowing what you would consider success is almost certainly a mistake you will regret. On the other hand, doing your homework will open the door to an experience you will tell your kids about.

Thanks to Justine Humenansky for the feedback. If you want to chat more, please reach out.

Programming Data and Money: The Data Market Yield

The ability to collect, store, program, and use data has brought connectivity, ubiquity, and intelligence to every modern application. The ability to represent and exchange money in a digital form is now adding the exchange of value to the stack. The ability to program money and data to obey a set of predefined rules feels like the next natural step.

A responsible data economy is based on two elements: confidentiality, and properly aligned incentives among end users.

This post outlines the future of programmable data and money and how the network can be used to enable users to earn a data market yield.


Breaking the data paradox with an incentive-based model

Fresh data has become a valuable tool for many companies — it can inform critical product decisions, improve personalization, enable user growth, and create better analytics. Yet users don’t derive the maximum value from their data, in part because they aren’t offered easy, clear, and transparent ways to do so. The two sides of the negotiation, users and companies, price the data differently, leaving a gap in the equation.

This gap is the result of misaligned incentives: companies design products that collect data, while marketplaces and aggregators distribute it to third parties. The user — the data producer and owner — is left outside the value-capture cycle. This is problematic because this cycle influences the decisions that companies take at different stages of their growth. In the earlier phases, they are incentivized to attract their ecosystem of users, developers, etc. through cooperation. The tradeoff between value extraction and cooperation progressively skews toward value extraction as the company grows. Decentralization, crypto, and smart contracts can break this dynamic.

To truly develop a new model, we need to go beyond protocol-layer incentive schemes. If crypto networks’ ultimate revolution is to encourage coordination and compensate network participants, information exchange is obviously the lifeblood of this cooperation. Hence, incentivizing users to engage in (fairly compensated) data exchange, in addition to incentivizing protocol use, seems like an obvious component to achieve the vision of a decentralized and cooperative web (How many times did you hear…”why do we need a decentralized Facebook?”).

Programmability and Privacy

Blockchain creates protocol-level incentives. Smart contracts create computers that can make commitments, introducing strong guarantees that code will continue to operate as designed without relying on humans or organizations. The Oasis Network also provides data encapsulation, which allows users to establish selective, consent-based computation on their information, and data privacy, which allows users to maintain ownership and confidentiality of the data even when the right to compute is granted to a third party.

Once data is encapsulated and controlled by users, programmatic access can be managed via smart contracts: each time an application needs to access users’ data, the smart contract enforces the specified policies and the blockchain records the transaction. Keeping the data private means that data buyers can’t reuse the information multiple times after the initial acquisition. Each request triggers the execution of the smart contract, resulting in a programmatic, per-consumption model.
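As a minimal sketch of these mechanics (plain Python, my own illustration; the policy fields, names, and log structure are invented and not the actual Oasis or Parcel API), each access request is checked against the owner’s policy and appended to a hash-chained log:

```python
import hashlib
import json
import time

# Hypothetical policy attached by the data owner to her encapsulated data.
policy = {"allowed_consumers": {"analytics-app"}, "allowed_purposes": {"aggregate-stats"}}
audit_log = []  # stand-in for the on-chain record of every access request

def request_access(consumer: str, purpose: str) -> bool:
    """Enforce the owner's policy and record the decision, as the smart contract would."""
    granted = consumer in policy["allowed_consumers"] and purpose in policy["allowed_purposes"]
    entry = {"consumer": consumer, "purpose": purpose, "granted": granted, "ts": time.time()}
    previous_hash = audit_log[-1]["hash"] if audit_log else ""
    entry["hash"] = hashlib.sha256(
        (json.dumps(entry, sort_keys=True) + previous_hash).encode()
    ).hexdigest()  # chain each entry to the previous one, like a block in a ledger
    audit_log.append(entry)
    return granted

print(request_access("analytics-app", "aggregate-stats"))  # True: within the policy
print(request_access("ad-broker", "resale"))                # False: outside the policy
```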

Programmatically attaching money to the flow of data creates the possibility to establish a direct and repeated relationship between the data producer/owner and the data consumer/buyer, making it possible for companies to ‘continuously’ cooperate with users.


Data Market Yield

The model above changes data-collection practice. Instead of finding creative ways to collect data and then hiding the policy behind long T&Cs, applications can create incentive-based models that encourage data-based cooperation between users and applications. Users remain the owners of the capsules that contain their data and give temporary access to computation models that are allowed to perform specific tasks only. Data requestors lock an asset that serves as payment for the corresponding usage of the data, and the transaction is settled automatically, with no negotiation or legal contract required. The resulting consumption-based model allows data to generate a yield: ‘control your data, invest it, and earn a payout’.
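A toy sketch of the per-consumption settlement described above (again my own simplification, with invented names and a fixed price; in the real flow the locked asset and the policy check would live inside the smart contract):

```python
PRICE_PER_QUERY = 10  # hypothetical price, in cents, per computation over the data

class Escrow:
    """Holds the buyer's locked funds and pays the data owner once per granted query."""
    def __init__(self, locked_cents: int):
        self.locked = locked_cents
        self.paid_to_owner = 0

    def settle_one_query(self) -> None:
        if self.locked < PRICE_PER_QUERY:
            raise RuntimeError("buyer must lock more funds before querying again")
        self.locked -= PRICE_PER_QUERY
        self.paid_to_owner += PRICE_PER_QUERY

escrow = Escrow(locked_cents=25)
for _ in range(2):  # two granted computations over the user's data capsule
    escrow.settle_one_query()

print(f"owner earned {escrow.paid_to_owner} cents, {escrow.locked} cents still locked")
```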

How it works today


How it works on a privacy-preserving platform such as Oasis Network


Where we go from here

At Oasis Labs, we built APIs to abstract the complexity of blockchain. We partnered with some of the main players in DeFi to integrate their protocols and services on the Oasis Network. We are supporting and incentivizing companies and developers that align with the vision: through our Dev Accelerator program and the Parcel SDK developed by Oasis Labs, the Oasis Foundation is encouraging developers to start incorporating privacy as an incentive for the end user.

Controlling your data and earning a yield should be considered normal. Creating the capability is the first step; raising awareness and showcasing the incentives is the next one. My next posts will be dedicated to discussing the partners we onboarded and why we decided to work with them. As always, blockchain is a collective effort and I would love to hear from you if you have feedback or suggestions.

Thanks to Dawn Song, Bennet Yee, Anne Fauvre, Justine Humenansky, and James Hinck for reading this in draft form.

This post was originally published here

Contact Tracing & Privacy: How to Solve the Trilemma


For an Italian living 10,000 km away, it is not pleasant to watch the internal conflict among the press, citizens, the government, and entrepreneurs caused by the (correct) choice to use “Immuni” as the contact tracing app. Over 4 million results for a simple Google search of “Immuni Privacy” feel like an immense wave of energy pointed in the wrong direction, as demonstrated by the behavior shown on countless previous occasions (dating apps, photo sharing, consenting to data access for games on Facebook, and so on).


Setting aside the small initial rant, I decided to write this post to describe a new technology called “privacy-preserving computation”: a set of mathematical techniques, studied for years but only recently made usable in real life, which allow an application to use users’ data without ever “seeing” the data itself. Let’s see how it works:

The problem: the “Contact Tracing Trilemma”

Stretching the example slightly, the problem is how to solve the ‘Contact Tracing Trilemma’, that is, the trade-off imposed by the technologies currently in use: most of the proposed models cannot provide location tracking, privacy protection, and real-time data analysis all together, but only (at most) two of these elements at the same time.

Contact Tracing Trilemma — Luca Cosentino

Following ‘traditional’ data-management models, guaranteeing privacy would mean severely limiting location sharing or real-time data analysis; tracking location and guaranteeing privacy without allowing real-time data analysis would make any project pointless. A partial solution to strike a good balance between privacy and functionality could be to run several apps at the same time (for example, a different app for each Region) in order to reduce how much data any single application holds: however, as the research by the Italian Luca Ferretti at the Big Data Institute of the University of Oxford shows, at least 60% of a given population must use the same app in order to reach a state of “epidemic control”.

Decentralized Privacy-Preserving Computation

Alice wants to know the average age of all the students attending her private math course. Alice has two options: 1) ask each student their age, or 2) call Roberto, who asks each student their age and reports only the average to Alice.

Case 1) Alice knows every student’s age.

Case 2) Roberto knows every student’s age; Alice only knows the average. Roberto promises not to reveal the data to anyone.

In both cases, the students must trust that Alice and/or Roberto will not use their data for other purposes in the future: but it will be extremely hard for a student to verify that the data was not revealed, say, 5 years later.


Decentralized Privacy-Preserving Computation solves this problem: the data is sent to a set of interconnected black boxes, programmed to run computations on the data placed inside them without ever revealing the data itself to whoever requests the result. The ‘secure, private, and distributed (decentralized) cloud’ built by Oasis Labs, the startup I work for in San Francisco, is an example of this technology.

Going back to the previous example, the students could independently send their data (their age) to the platform, with the guarantee that no one (not even Oasis Labs) can access the data itself, only the result of the computation.
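To make the idea concrete, here is a minimal sketch (my own illustration, not Oasis code) of one classic technique behind this kind of computation, additive secret sharing: each student splits their age into random shares, no single server ever sees a real age, and yet recombining the partial sums reveals only the average.

```python
import random

PRIME = 2**61 - 1   # field modulus; individual shares look like uniform random numbers
NUM_SERVERS = 3     # hypothetical number of non-colluding compute nodes

def share(secret: int) -> list[int]:
    """Split a secret into NUM_SERVERS additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(NUM_SERVERS - 1)]
    return shares + [(secret - sum(shares)) % PRIME]

# Each student shares their age; server i only ever sees the i-th share of each student.
ages = [21, 19, 25, 30]
per_server = [[] for _ in range(NUM_SERVERS)]
for age in ages:
    for i, s in enumerate(share(age)):
        per_server[i].append(s)

# Each server adds up the shares it holds and publishes only that partial sum.
partial_sums = [sum(shares) % PRIME for shares in per_server]

# Recombining the partial sums reveals the total (and hence the average), not the inputs.
total = sum(partial_sums) % PRIME
print("average age:", total / len(ages))  # -> 23.75
```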



Although not entirely accurate, the example above shows how computation on private data, or Privacy-Preserving Computation, works. What does “Decentralized” mean?

If no one has access to the data, we need a way to verify that the result is correct and that malicious behavior is blocked: through distribution/decentralization, the verification mechanism is made highly available, so that no bad actor can hide their illicit actions. In essence, distribution removes the need to trust any single ‘computation validator’, while guaranteeing the confidentiality of the data and the correct execution of the computation.

In slightly more technical terms, the keys to decrypt the data are not held by individuals but only by algorithms authorized by the users themselves, and only for the agreed purpose: in addition, everything that happens on the platform is recorded in a tamper-proof database.

How could this help with the current situation caused by COVID?

If we replace the age in the example above with the data that tracks our location, it is fairly intuitive to see how this technique solves the Trilemma shown above:

it would then be possible to guarantee privacy without sacrificing geolocation or the ability to analyze the data in real time. The user would have complete control over their own data, the ability to decide who gets access to it, and the ability to verify how it is actually used.

In addition, users could decide to give selective access to their data (while always keeping the data itself hidden) to the government, researchers, doctors, etc., who could use this data to build predictive models.

Going into more detail, a possible implementation of the model would be the following:

  • When Alice and Roberto meet (within a certain distance), their phones exchange a ‘token’ via Bluetooth
  • Roberto gets tested at a hospital and turns out positive; an encrypted but unique code is associated with the test result and sent by the hospital to a server (in practice there are two servers, so that no one can identify a user and link them to their infection status)
  • Alice’s phone checks whether the tokens of the people she has met exist on this server, which holds the list of ‘infected’ tokens
  • Upon finding Roberto’s token, Alice’s phone alerts Alice that she may have been exposed to the virus (a simplified sketch of this matching step follows below)
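A toy sketch of the matching step only (my own simplification; the names and data structures are invented, and a real system such as Epione performs this set intersection with cryptographic private set intersection, so the server never learns Alice’s contacts):

```python
import secrets

def new_token() -> str:
    """Random token a phone broadcasts over Bluetooth during an encounter."""
    return secrets.token_hex(16)

# Alice's phone stores the tokens it received from nearby phones.
roberto_token = new_token()
alice_contacts = {roberto_token, new_token(), new_token()}

# The hospital uploads the tokens of users who tested positive.
infected_tokens = {roberto_token}

# Matching step: here the check is done in the clear for readability; Epione runs the
# same set intersection privately, so neither party learns anything beyond the match.
if alice_contacts & infected_tokens:
    print("Alice may have been exposed: recommend testing and self-isolation")
```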

The Epione system, developed through a collaboration between UC Berkeley, the National University of Singapore, and Oasis Labs, proposes exactly the model just described.

A further advantage of the Epione model over some of the existing models is the prevention of false-positive claims: a malicious user could falsely declare that they tested positive, which would spread false information, scare other users, and reduce trust in the system.

Conclusion

I hope this post, full as it is of technical inaccuracies deliberately introduced to simplify the message, will contribute to the evolution of the product. My greatest hope is that the ongoing fight between the parties ends in the best possible way, in favor of a fast and sustainable restart.

A heartfelt good luck to Bending Spoons: your talent and your success are a precious asset for our country.

Luca Cosentino
luca.cosentino@berkeley.edu


Why Visa acquired Plaid


Photo by Brooke Cagle on Unsplash

It seems clear that Plaid’s revenue potential is only a (minimal) part of the reasons behind the acquisition: the 2x price/valuation is probably more justified by the desire to either protect the core business or keep competitors away from the new business.

Plaid in numbers:

💵 investments to date: $353.3M

💰 revenues: $150–250M

🤓 valuation prior to acquisition: $2.65B

🥳 acquired for: $5.3B (2x the previous valuation)

🤝 provides connection for 80% of the largest US FinTech apps

👨‍👨‍👦‍👦 200M accounts linked (115% CAGR since 2015)

👩‍💻 2,600 developers

#1: Plaid is facing risks (that Visa can solve)

A crucial component of Plaid’s business is the ability to get access to users’ bank accounts and scrape the data on behalf of 3rd-party apps. Thanks to this capability, users are offered an enhanced experience that materializes as transparency, ease of use, financial advice, and so on.

Two problems arise: 1) banks are complaining about the poor security of this practice and 2) users lose control of their data without realizing it.

Visa can leverage its more solid relationships with bank partners, which can lead to a safer authentication and data-acquisition system than what Plaid has built so far.

#2: Data

Despite some limitations, payment networks such as Visa and Mastercard are huge data aggregators, as they store data on millions of transactions per day. The biggest limitation is that they only see (parts of) the transactions that happen within their network. The Plaid acquisition would potentially enable Visa to see the rest of the picture: Plaid is currently used by all those apps that reduce the need for Visa, such as Venmo or Square Cash, and it also sees the non-Visa transactions of a bank account.

Although this is very tempting, Visa will have to be really careful: Plaid’s clients won’t be keen to let Visa see or use their data, which creates a case for switching to Plaid’s competitors.

#3: Expansion

In the US, Plaid’s penetration in key FinTech verticals is between 2 and 7%. This represents a win-win situation for Visa: on one hand, Visa’s relationships will help Plaid sign more clients; on the other hand, Visa will expand its product offering and its addressable market.

Visa will also push Plaid’s expansion outside the US: in fact, there are ~15x more fintech users in International markets than in the US…and Plaid’s penetration in these markets is really low.

#4: Payments

But the most interesting use case is the potential development of online payments. Visa is a middleman, and its value lies in its ability to create a standard for merchants and buyers to exchange digital money. While the card is a very practical tool in the offline world, especially with the advent of faster technologies such as contactless, it’s generally a strong friction point in online commerce. That’s why companies such as PayPal, Venmo, Amazon, and Visa itself are trying to simplify the process by pre-saving the card and letting users pay through a simple authentication.

A PISP (Payment Initiation Service Provider) is a third party that enables a payment to be authenticated and paid directly out of a person’s or business’s bank account rather than via a debit or credit card. A company like Plaid strongly simplifies this process, as it helps users pay through their bank with just a username and password — an easier process than PayPal, since there is no need to create another account on top of the one the user already has with their main bank.

The rise of a ‘pay with your bank account’ system would allow merchants to save on credit card fees without creating any friction for the user, while also reducing the risk of credit card fraud. The Plaid acquisition gives Visa enough visibility into these new trends to anticipate competitors’ moves while protecting its business model.

London DeFi Summit: Key Takeaways


Last week we sponsored the DeFi Summit in London; 300 DeFi enthusiasts, developers, fund managers, investors, and founders got together at Imperial College to discuss the status and development of Decentralized Finance. Cambrial and Semantic did a fantastic job coordinating ~60 sessions across 4 rooms, with speakers and participants from all over the world.

This article is a highlight of my top takeaways from the conference as well as a recap of what we announced.

We announced our Open Finance Developer Kit

Oasis Labs and the Oasis Network can help expand adoption of DeFi and Open Finance. We believe that the next generation of Open Finance dApps will be powered by Privacy, Scalability, Composability, and Identity: we announced the launch of our new toolkit that will “empower developers to write dApps that keep data confidential while simplifying the integration of Open Finance protocols, primitives, and services”.

For those who missed the event, we recently published a blog post with our view.

my presentation in the Main room on Tuesday

The need for a privacy-preserving version of Compound or Fulcrum

Lending currently accounts for roughly 85% of the volume in DeFi (source: defipulse.com) and virtually all of it runs on Ethereum. This means that wallet addresses, functions, amounts, and fees are all public.

But this leaves a big gap when you compare it to how traditional financial systems run: the participation of institutions or individuals in a given financial activity is intentionally treated as highly confidential and hence kept private. Why should it not be the case in Open Finance?

As an example of the relevance of privacy, we proposed the architecture of the two most common functions of a privacy-preserving version of Compound. In the current model (left side), the method (mint or borrow) and the amount (payload) are both publicly available. In the proposed model (right side), these pieces of information are encrypted and not revealed to the market.
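A minimal sketch of the idea (my own illustration; the function names and envelope format are invented, not the actual Compound or Oasis interfaces): the method and payload are encrypted client-side, so on-chain observers see only an opaque blob plus the metadata needed for routing.

```python
import json
from cryptography.fernet import Fernet  # symmetric encryption, for illustration only

# In the current public model, this entire call would be visible on-chain.
call = {"method": "borrow", "payload": {"asset": "DAI", "amount": 5_000}}

# In the privacy-preserving model, the user encrypts the method and payload before
# submitting; only the contract's confidential runtime (which holds the key in this
# simplification) can decrypt and execute the call.
key = Fernet.generate_key()
envelope = {
    "to": "0xCompoundLikeContract",  # routing information stays visible
    "ciphertext": Fernet(key).encrypt(json.dumps(call).encode()).decode(),
}

print(envelope["to"], envelope["ciphertext"][:32], "...")  # observers see an opaque blob
```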

my presentation in the Main room on Tuesday

Wallets

The integration between DApps and wallets, powered by a user-friendly UI and guaranteed by the security of integration at the smart contract layer, is laying the foundation for a much better user experience. In particular, we recommend checking out what Argent, with its user-friendly and secure wallet, and Gnosis, with its multi-sig custodial wallet, are doing.

Insurance

Security is concern #1 for investors: if I invest my money, I want to make sure I don’t lose it because of bugs, technical issues, or fraud. Given that investors can’t verify lines of code or sources of information such as oracles, and that auditing firms don’t have a strong reputation yet, insurance becomes extremely important to convince liquidity providers to route their investments to Open Finance products.

An interesting model has been proposed by Nexus Mutual: they advocate for the creation of a fund that covers against smart contract failure, funded by members’ contributions only and used to establish the validity of claims.

Identity

User verification and KYC are ways to prevent system attacks and comply with regulations. Onfido, an identity provider, is on a mission to make identity portable and is piloting an on-chain deployment to store all the cryptographic information needed to ensure credentials are tamper-proof and communication between parties is secure.

What still remains pending is how to ensure the completeness of the user’s financial history. As we anticipated in our introductory article, identity also means a complete financial history, in order to create a reputation system that enables the shift from over-collateralization to under-collateralization.

my presentation in the Main room on Tuesday

Oracles

As highlighted by the UMA protocol, a decentralized oracle must work in order to build truly decentralized and scalable financial smart contracts. Bridging on-chain and off-chain sources of data becomes an imperative for creating real-world-like services in Open Finance.

The Provable team is working on making oracles easy to access, blockchain agnostic, and secure.

Gaming

A very interesting point of view was brought by the DappRadar team: in their view, gamers are the most obvious audience for Open Finance. The demographics of gamers (Millennials and Gen Z) is the main reason behind their view: people in this age range are already used to digital tokens, and it’s just a matter of time before these assets become tradable on markets. For instance, it will be possible to take a non-fungible token from a game, wrap it into a tradable token, and then use this token to buy more non-fungible tokens.

Open Questions

Although the conference highlighted a lot of excitement and positivity around the next steps in Open Finance, some questions remain open and top of mind for the audience. I captured some of them here in case they serve as an inspiration for future collaboration and research.

  • Is it really possible to build a truly decentralized financial system if contract owners can upgrade the contracts themselves?
  • For organizations that rely on a voting system, there is a need to create emergency programs. However, how can we trust users to vote in favor of fixing a flaw? And if that decision needs to be validated, how can we wait, say, 2 weeks for this to happen?
  • What are the compliance, stability, and scalability implications of bringing Open Finance to traditional institutions?
  • If we compose multiple protocols in an Open Finance dApp, whose responsibility will it be if something goes wrong (e.g., a bug in one protocol)?

New to Open Finance? Some resources you may find useful

New to Oasis? Some resources you may find helpful

Privacy is critical to mass adoption of Open Finance


How Oasis Labs and the Oasis network can help expand adoption of DeFi and Open Finance

Open Finance applications are primed to reinvent the financial system — pushing it towards a design that relies less on status, wealth, and geography and more on a set of programmable conditions that have the benefit of removing the subjectivity and bias that cause high costs, risks, and inefficiencies.

Note: given that Decentralized Finance (DeFi) is a generic term that signals a series of products that are meant to be accessible to everyone, the term Open Finance seems to better serve the mission.

In the last year alone, we witnessed how the first generation of Open Finance has provided the market with a huge number of protocols and primitives that are meant to support specific pieces of this new financial system: the ~$500 M currently locked in Open Finance, up from $1,400 in September ’17 (source), is a material signal that Open Finance is growing in attention and adoption.

But in order to truly overtake traditional financial systems, Open Finance needs to take significant steps to improve how it addresses privacy, scalability, composability, and identity. At Oasis Labs, we believe our platform has the unique characteristics required to solve these issues.

Privacy

Over the past 3 months we’ve talked to 35+ companies in the Open Finance space, and the consistent feedback we’ve received is that the need for privacy and confidentiality is imminent, yet no viable platform exists to address it. It’s fairly easy, for example, to expect consumers to demand that their banking transactions be kept private, their wealth hidden from unauthorized parties, and their identity only available for specific calculations.

Compared to the traditional system, Open Finance is based on the idea that parameters (such as interest rates) are rebalanced in real time as a function of new data (such as supply and demand): the ability to collect inputs from multiple sources, secretly compute over them, and release only the output of the computation will soon have a terrific impact in segments such as decentralized exchanges, lending, trading, payments, scoring, and collateralization.

A few examples of how the Oasis Labs privacy-preserving development platform could be useful:

  • In Collateralized Lending, privacy can be an enabler: the privacy-preserving version of a Collateralized Debt Position (CDP) can keep transactions private from whoever is not involved in the transaction, protecting the information around the actual participation of a company or individual in a system, and preventing manipulation such as front-running.
  • In Dark Pools, privacy can be a game-changer: money managers can protect their trades from the order book, preventing competitors from copying the strategy and preventing the trade price from moving as a result of the incoming signal.
  • For Stablecoins, privacy can solve the traceability-fungibility problem: a private stablecoin would enable businesses to protect their interests and relationships, which is key to large-scale business adoption.
  • Confidentiality of computation will protect developers from compliance issues: with more power moving from legal agreements to lines of code, the execution of programs within a secure and private environment will protect developers against potential compliance problems because of the guarantee that the data is exposed only to whoever is granted access to it.

Scalability

Despite some attempts, the current Open Finance environment is bounded by the performance of Ethereum, which compares really poorly to the traditional financial system (Ethereum currently supports ~15 transactions per second compared to, say, the 2,000 processed by Visa).

In a space where thousands of transactions are submitted each second for the reasons mentioned above, we believe that Open Finance has an opportunity to do better than the ‘pending transaction’ status of some current systems. Oasis Labs’ unique architecture improves scalability from both a throughput and a complexity standpoint, allowing Open Finance applications to privately (if needed) compute complex transactions in real time.

Composability

Composability is at the center of Open Finance applications: thanks to the integrations on our Open Finance development platform, developers will easily be able to create applications that combine multiple protocols that work together as pieces of a puzzle.

An example could be a decentralized version of Wealthfront (the all-in-one solution that helps you earn more interest on your cash, get advice on how to manage your savings, and automate your investments) that allows users to deposit US dollars, convert them into crypto, earn interest through Compound and dYdX, convert back to dollars, and transfer principal plus interest back to the main bank account.

Another important aspect of composability is the ability to move liquidity across multiple protocols: borrow against collateral on one platform, use the liquidity to open an interest-generating position somewhere else, move the position to the protocol that pays the highest interest rate, and manage the entire process from a user-friendly interface that lets users deposit fiat, convert it automatically into crypto, and cash out back to fiat when done; a sketch of what such a composed flow could look like follows below.
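Purely as a hypothetical sketch of what ‘protocols as puzzle pieces’ could look like to a developer (all interfaces, names, and rates below are made up, not real Compound, dYdX, or Oasis APIs):

```python
from dataclasses import dataclass

@dataclass
class LendingProtocol:
    """Stand-in for a lending protocol such as Compound or dYdX (hypothetical rates)."""
    name: str
    supply_rate: float  # annual interest paid on deposits

    def deposit(self, amount: float) -> dict:
        return {"protocol": self.name, "principal": amount, "rate": self.supply_rate}

def value_after(position: dict, years: float) -> float:
    return position["principal"] * ((1 + position["rate"]) ** years)

# Compose the flow: fiat in -> stablecoin -> best-yield protocol -> fiat out.
protocols = [LendingProtocol("compound-like", 0.031), LendingProtocol("dydx-like", 0.042)]
usd_deposit = 1_000.0
stablecoins = usd_deposit                              # assume 1:1 fiat-to-stablecoin conversion
best = max(protocols, key=lambda p: p.supply_rate)     # route liquidity to the best rate
position = best.deposit(stablecoins)

print(f"routed to {best.name}; value after 1 year is about ${value_after(position, 1):,.2f}")
```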

Oasis Labs wants to make this process as easy as possible for developers: our Open Finance platform will allow full portability of Ethereum-based applications. Additionally, we are partnering with the main protocols, primitives, and services to integrate existing capabilities in our development environment.

Identity & Reputation

The lack of an identity system that ensures the completeness of a user’s financial history (in addition to KYC), together with the high volatility of the market, is the reason behind the heavy over-collateralization of assets required by Collateralized Lending protocols, which creates a high barrier to mainstream adoption.

A reputation system that is immune to Sybil attacks needs identity to be tied in, and hence the processes involved need to run privately and securely.

Although we agree that proof of attestation is the first necessary step, our belief revolves around proceeding step by step on a path that leads to a reputation system: in our view, the attestation should be shared across third parties in order to verify how good the external verification is; this will allow the production of a metascore, which will then be used to form a reputation system (a toy illustration of such an aggregate follows below).
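Purely as an illustration of that last sentence (invented weights and scores, not a proposed scheme), a metascore could be a verifier-weighted aggregate of the attestations a user has collected:

```python
# Each third party re-verifies the original attestation and reports a score in [0, 1];
# verifiers themselves carry a weight reflecting how much the system trusts them.
attestation_scores = {"verifier_a": 0.9, "verifier_b": 0.7, "verifier_c": 1.0}
verifier_weights   = {"verifier_a": 0.5, "verifier_b": 0.2, "verifier_c": 0.3}

metascore = sum(attestation_scores[v] * verifier_weights[v] for v in attestation_scores)
print(f"metascore: {metascore:.2f}")  # feeds the reputation system described above
```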

Just like every other building block, identity needs to evolve to become decentralized and programmable: decentralized, in the sense that a user’s identity is curated from multiple sources and comes together to serve the user without compromising privacy.

Programmable, in the sense that both users and developers can interact with identity just as they do with other software services.

Once decentralization and programmability are achieved, we will be able to create stronger identity systems that can provide reputation for individual users, something that is impossible to achieve with current systems.

Given the fundamental role of privacy in the creation of a strong identity and reputation system, we are actively researching how to augment existing solutions or build our own.

Final thoughts

It’s with a lot of excitement that Oasis Labs is approaching the next few months. From integrating multiple protocols to working with the main Open Finance dApps, we will be working to create the best possible development environment to enable developers to bring Open Finance closer and closer to mainstream adoption.

Work with us!

We are partnering with the main wallets, protocols, and other services to build the most comprehensive platform for Open Finance. If you are interested in making yours available, or in leveraging our platform to build your DApp, please reach out to Luca at info@oasislabs.com.

New to Open Finance? Some resources you may find useful

New to Oasis? Some resources you may find useful

Life lessons I like or wish I knew before


A constantly-updated list of life lessons I wrote myself or found somewhere else (in no particular order)…because…best teachers are not always found in a classroom

Photo by Aaron Burden on Unsplash

In constant update:


Everything you say and do creates an impact

You’ll never get a day off from your responsibilities

Your beliefs become your thoughts,
Your thoughts become your words,
Your words become your actions,
Your actions become your habits,
Your habits become your
values,
Your values become your destiny
(Gandhi)

On your last day on earth, the person you became will meet the person you could have become

If you’re brave enough to say goodbye, life will reward you with a new hello (Steve Jobs)

One person’s ceiling is another person’s floor

My desire to improve everything often destroys the moment

The second time you make the same mistake is a choice

Our compensation and fulfillment at work are made of Salary, Brand Reputation, vanity metrics (e.g., face time), and Results.
Pick the environment where you can focus on Results.

Never fuck over someone who cares about your well being.
It shows that you have no principles

We live in a world where thinking rationally, eating healthy, having a vision and working for it, getting exercise, not drinking is almost a counterculture, an act of defiance

We should prioritize trustworthiness, because ultimately, that is how we will make a real contribution to society

Why platforms (and relationships) fail:
(1) mispricing on one side of the market
(2) failure to develop trust with users and partners
(3) prematurely dismissing the competition
(4) entering too late.

(here)

Create the best possible operating standards, develop the character of your players, develop the culture of your team and the score takes care of itself (Legacy)

No one is bigger than the team. One selfish mindset will infect a collective culture (Legacy)

Just because it’s common sense doesn’t mean it’s common practice (Legacy)

Your thoughts become your words, your words become your actions, and your actions dictate your destiny

The ability to focus on what feels painful & effortful until it gets pleasurable is the main difference between being good and being great at something

Relaunching the Berkeley Entrepreneurs Association: introducing Berkeley StEP


By Luca Cosentino and Adam Brudnick

The unlocked potential of UC Berkeley is astonishing.

It doesn’t take long to be captivated by the basement of the Moffitt Library, the place where students transform long CS group assignments into fun projects. When students get together, the potential is massive: if you don’t believe me, I encourage you to attend the Mobile Developers’ App Fair or Cal Hacks to see how much talent UC Berkeley produces every year (hint: >10x as much as what you think).

But even with all this talent, things aren’t perfect. The challenge is that too often, lines of code or customer acquisition plans die within the walls of a class assignment or a research project, without ever seeing the light of day.

This makes it almost impossible for other students or potential investors to get to know the project or the people behind it, and that keeps these potential unicorns imaginary. Even more, the severe decentralization of Cal’s startup ecosystem increases the perceived barrier to an entrepreneurial experience, resulting in most ideas being abandoned before they’re given their fair shake. Surprisingly enough, this is equally common for undergrads and grads, whether they’re in the technical or business communities.

If you’re reading this, there’s a good chance you’ve thought about starting a company but gave up because you didn’t think you had what it took, couldn’t find the right team, weren’t sure about the idea, or couldn’t figure out the next step.

We’ve been there, and we want to help. At BEA, we feel the responsibility of helping students become entrepreneurs. That’s our core mission, and that’s why we’re so excited about what we’re sharing today.

We are proud to announce Berkeley StEP, the largest-ever pre-acceleration program for entrepreneurship at UC Berkeley.

UC Berkeley students can apply whether or not they have a team or an idea; during the team formation period, applicants will be matched with potential co-founders with complementary skill sets and will go through a 10-week period in which they will ideate, prototype, and present their MVP, while laying the groundwork to get ready for pre-seed funding from the likes of Dorm Room Fund, the House, and Skydeck, making it their very first “step” into entrepreneurship.

Why StEP?

  • 👩‍❤️‍👩 Find team members to pursue an idea
  • 🖇️ Learn the process of determining product-market fit
  • 🏆 Become a founding member of an early stage company
  • 💪 Get paired with an Industry Mentor for real-time feedback
  • 🙌 Access an incredible network of Angel Investors, VCs, and Berkeley Faculty
  • 🥇 Preferred access to local accelerator programs and pre-seed investors
  • Access startup resources such as cloud credits and discounts

…..without giving up any equity

The program is supported by technical and business student clubs, VCs, mentors, entrepreneurs, and professors across UC Berkeley. We strongly believe that diverse teams find the most innovative solutions, and that’s why we’ve eaten our own dogfood by building this program with input from across the Berkeley ecosystem.

You can find more information around the program, curriculum, resources, prizes, mentors and supporters on the StEP website.

Our first batch will include a very limited number of spots, so we highly encourage you to apply ASAP. Applications will be reviewed on a rolling basis.

🚨🚨
If you are interested in attending an info-session at UC Berkeley, register here

🔥🔥
If you are interested in being involved, becoming a mentor, or supporting the program, please feel free to reach out to bea@berkeley.edu.

But wait, there’s more: the launch of StEP is happening alongside a full-on reboot of the Berkeley Entrepreneurs Association:
– check out our new website
– get in touch with our new team
– join our re-launched facebook group
– subscribe to our Medium publication

Btw…did you know that Shazam, Lime Bike, Blue River, Beats, Apple, eBay, and Intel are only some of the companies started by UC Berkeley founders? UC Berkeley is the 2nd most entrepreneurial school in the world… together, let’s make it #1.

#gobears

Data Scientist to Product Manager? It’s not a common path, they say


Why data science makes great product managers.

You spent a few years crunching data, analyzing information, and going deep on customer behavior online, offline, omnichannel, on mobile, in store, and at every possible intersection of those. You are constantly under pressure, and deadlines seem to run fast and in the opposite direction. Your Python and R are full of packages and libraries, and your folders contain plenty of v1, v2_lc, …, v11_final_lc (thanks, Google, for solving that, btw).
Your shoulder hurts from the many pats you were given for your valuable contribution to the team, and you have just won a $20 movie card with free popcorn or even an Amazon voucher.
You finally realize that, while you have acquired a unique skill set, learned how to navigate complexity, and trained your thought process, you have never been in control.

You start your research, talk to people, and read a bunch of articles, until you realize that your background is incredibly appropriate for a career you had never thought about before: Product Management.

In fact, Data Science makes great Product Managers.

Let’s see why:

Data Scientists and Product Managers use data to inform their decisions.

Data Scientists’ bread and butter is data; they analyze large quantities of information, synthesize it in a few key points, and suggest decisions based on their findings.

Product Managers are obsessed with shipping features their users love. Users’ needs and feature development are based on surveys, research, tests, data.

Data Scientists and Product Managers present their findings to their stakeholders and seek consensus.

Data Scientists present their findings to their audience; regardless of whether they work for internal or external clients, they always have to sell their story and, guess what, they use data to do so.

Product Managers, by definition, coordinate multiple stakeholders at the same time and won’t be successful unless they are able to convince everyone. Data is a PM’s friend.

Data Scientists and Product Managers work cross-functionally.

Data Scientists work with and for a number of teams within the organization; they may supply information to clients, client managers, product managers, finance team, and strategy group.

Product Managers work with designers, engineers, market researchers, finance, and product leaders. Everyone generally knows what everyone else is doing and who is the point of contact for specific topics.

Data Scientists and Product Managers see success as a team achievement.

Data Scientists may do most of their work independently. They are often given a task and they execute on it. Successful Data Scientists, however, need the big picture, and this often requires building relationships with the broader team. No matter what, an achievement is never the result of their work alone.

Product Managers may have the best intuition or create the best mockup; however, success depends on execution and execution requires the entire team to be involved in the process.

Data Scientists and Product Managers have to prioritize.

Data Scientists are overwhelmed with an infinite amount of data; their analyses can take multiple directions and they can always go one level deeper. Hence they are constantly forced to prioritize.

Product Managers are overwhelmed with options; the user-centric approach always leads to a number of different options and the only way to succeed is to focus on what transforms into the impact they are seeking.

Data Scientists and Product Managers need to know the market they work on.

Data Scientists may know every analysis technique but they always have to explain their findings in the context of the market they work on while keeping their eyes open to spot important details.

Product Managers need to know everything about users, competitors, potential entrants, customer journey, and industry-specific dynamics. They have to put themselves in users’ shoes, while maintaining a fresh and unbiased perspective.

Data Science is a phenomenal school for aspiring Product Managers.

Bring this argument to the table next time someone you are seeking advice from tells you “I don’t know anyone who transitioned from Data Science to Product Management, it’s not a common path”.

Machine Learning: where to start from


The application of Machine Learning to business is becoming more and more popular. I guess what really convinced me to spend time on Machine Learning is exactly this fascinating intersection between technology, algorithms, statistics, and business: very rarely has an innovation found ground in business as quickly as ML.

This article is meant to be a one-stop resource for beginners in the field: instead of reinventing the wheel, I tried to collect resources I found useful on my path so far.

Some preliminary readings:

How to get started

Photo by Franki Chamaki on Unsplash

Top ML Algorithms — must know

Stay informed: website and newsletters

Wanna start something yourself?

In Conclusion…

There is no doubt that ML is the present and the future of many industries. In the coming months, I will be analyzing the impact that ML can have on traditional businesses: specifically, I’d like to investigate how traditional companies can compete with more modern players that were born leveraging ML and data from day 0 (e.g., Amazon vs. traditional retailers). If you’d be interested in participating or supporting, please do not hesitate to get in touch!
