
# Technical Whitepaper 2.0

Updated: May 22, 2023

This whitepaper is tailored to technically proficient readers interested in integrating zkPass into their development stacks and who want to understand this technology’s reasoning and strategic decisions. It aims to provide insight into the conceptual frameworks that drive zkPass, making it valuable for developers involved in component creation, tool development, or building businesses around zkPass within the broader ecosystem.

In this whitepaper, we introduce the zkPass protocol, an innovative cryptographic structure leveraging the power of three-party Transport Layer Security (TLS), Multi-Party Computation (MPC), and Interactive Zero-Knowledge Proof (IZK). The ubiquity of HTTPS allows users to access private data with robust end-to-end confidentiality and integrity. However, a shortfall exists: users cannot inherently prove to third parties that the data originated from a specific website. To rectify this, we propose the zkPass protocol. It empowers users to convincingly demonstrate that data accessed via HTTPS was obtained from a particular website and to assert statements about this data in a zero-knowledge context, thereby preserving privacy and averting the exposure of sensitive information. Serving as a conduit between Web 2.0 and Web 3.0, the zkPass protocol liberates private data from the confines of centralized web servers, paving the way for seamless migration into a decentralized era.

• Prover (P): the user or client who needs to generate a proof for their statement.

• Verifier (V): the platform responsible for verifying the Prover’s statement.

• Node (N): the zkPass Node, a participant in the protocol execution used to verify the authenticity and integrity of the Prover’s private data.

• DataSource (S): the HTTPS server, the trusted data source that owns the Prover’s data.

In the traditional data validation and confirmation process, the Prover submits their information to the Verifier. The Verifier, in turn, retrieves this data and performs authentication checks in collaboration with the DataSource. Thus, the Verifier serves merely as an intermediary or broker in this model.

Each party faces unique challenges in this scenario: for the Prover, there’s a risk of disclosing excessive personal information; for the DataSource, while it’s a trusted data provider, it’s incapable of offering personalized verification services; and for the Verifier, they’re privy to all the customers’ private data, gaining total access, which presents significant potential risks of data leakage.

We propose a new approach that repositions these three entities, positioning the Prover between the Verifier and the DataSource. Rather than the traditional method, the Prover uses their access token to directly locate and retrieve their data from the DataSource, subsequently generating a Zero-Knowledge Proof (ZKP) for the Verifier to check. This process ensures the Verifier remains unaware of the Prover’s personal information.

In order to implement this structure, we incorporate 3P-TLS, MPC, and IZK technologies.

Transport Layer Security (TLS) is the secure protocol for HTTPS, supported by almost all DataSources. It is a two-party protocol designed for client/server structure. We have built the 3P-TLS protocol based on elliptic curve DH protocol and combined it with MPC and Oblivious Transfer(OT) to prevent cheating.

One challenge we face is that the Prover may forge proof information provided by the DataSource to deceive the Verifier. The solution to this problem is MPC: the Prover and Verifier each hold half of the session MAC key, the data integrity key used to maintain data integrity. Since the Prover holds only half of the key, it cannot forge or tamper with responses provided by the DataSource, and therefore cannot deceive the Verifier. MPC also prevents the Verifier from learning any private information about the Prover: during the MPC handshake, the Verifier never obtains the encryption key (the data confidentiality key), so it cannot decrypt any data and learns nothing private about the Prover.
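The split-key idea can be illustrated with additive (XOR) secret sharing: neither share alone suffices to compute a valid MAC. A minimal sketch, assuming XOR sharing of the key (the real protocol derives the shares inside MPC and never reconstructs the key at one party):

```python
import hashlib
import hmac
import secrets

# The session MAC key is never held by a single party: the Prover and the
# zkPass Node each hold an XOR share; only their combination is the real key.
mac_key = secrets.token_bytes(32)
prover_share = secrets.token_bytes(32)
node_share = bytes(a ^ b for a, b in zip(mac_key, prover_share))

# Recombining the shares recovers the session MAC key...
recombined = bytes(a ^ b for a, b in zip(prover_share, node_share))
assert recombined == mac_key

# ...so a MAC computed from the combined shares matches one computed with the
# full key, while a single share produces an unrelated tag.
msg = b"GET /api/profile HTTP/1.1"
full_tag = hmac.new(mac_key, msg, hashlib.sha256).digest()
joint_tag = hmac.new(recombined, msg, hashlib.sha256).digest()
share_tag = hmac.new(prover_share, msg, hashlib.sha256).digest()
print(full_tag == joint_tag, full_tag == share_tag)  # True False
```

In the actual protocol the recombination happens inside the MPC, so neither the Prover nor the Verifier ever sees the full MAC key in the clear.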

After obtaining information from the DataSource, the Prover needs to prove certain statements about the response in a secure and private manner. We use Zero-Knowledge Proofs (ZKP) to achieve this goal. More specifically, we use interactive commit-and-prove zero-knowledge proofs (ICP-ZKP) to deal with the large scale of the circuit. The Prover gives the commitment as $d = commit(m, r)$, where $m$ is the message and $r$ is the randomness. The Verifier then checks $T/F = verify(m, r, d)$. $d$ does not reveal any information about $m$ (hiding), and the Prover cannot find different $m$ and $m'$ (and $r$ and $r'$) such that $verify(m, r, d) = verify(m', r', d) = T$ (binding). During the protocol, the Prover needs to commit its private witness, then prove the input wires and the computation of the gates one by one, and continue committing the output wires until the final one.

TLS is one of the most widely used protocols for secure communication over the Internet. It encrypts data from plaintext to ciphertext and vice versa, providing data security and privacy by encrypting traffic to prevent sensitive data from being leaked to third parties. The process consists of two sub-protocols: handshake and record layer. The goal of the first sub-protocol is to negotiate a secure key between two endpoints, while the second uses the agreed key to protect communication.
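The commit/verify interface described above can be instantiated minimally with a hash function. This is an illustrative sketch only: SHA-256 as the commitment function is our assumption, and the actual protocol uses IT-MAC commitments built on VOLE rather than a hash.

```python
import hashlib
import hmac
import secrets

def commit(m: bytes, r: bytes) -> bytes:
    # d = commit(m, r): hiding because r is uniformly random; binding because
    # opening to a different (m', r') would require a SHA-256 collision.
    return hashlib.sha256(r + m).digest()

def verify(m: bytes, r: bytes, d: bytes) -> bool:
    # T/F = verify(m, r, d): recompute the commitment and compare.
    return hmac.compare_digest(hashlib.sha256(r + m).digest(), d)

m = b"balance >= 1000"          # the committed message
r = secrets.token_bytes(32)     # the randomness
d = commit(m, r)

print(verify(m, r, d))                    # True
print(verify(b"balance >= 9999", r, d))   # False
```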

We refer to the endpoint initiating the connection as the ”client,” while the term ”server” denotes the responding endpoint. Additionally, we define the ”transcript” as the comprehensive record of all messages exchanged over the connection up until the point of analysis. The Handshake protocol itself can be divided into three distinct phases: the key exchange phase, the authentication phase, and the key derivation phase.

The protocol is started by the client sending a ClientHello to the server, containing the cryptographic algorithms supported by the client. The server replies with a ServerHello, containing the cryptographic algorithms selected from those offered by the client and a Diffie-Hellman public key. When the initial exchange is concluded, the two endpoints perform a Diffie-Hellman key exchange using the keys specified in the Hello messages.

Starting from the server, the endpoints exchange their certificates (ServerCertificate) and a signature on the transcript using the key specified on them (ServerCertificateVerify). Finally, they send an HMAC on the transcript using the MAC keys obtained from the key derivation (ServerFinished and ClientFinished). The client uses the first key, the server uses the second one. The endpoints verify all the signatures and the MAC provided by the other endpoint. If any check fails, the connection is closed.

At the end of the Handshake, the endpoints feed the new messages of the transcript into the key derivation function. The key derivation function maintains an internal state, so all of its outputs depend on the key exchange phase. At the end, the parties obtain application keys (encryption keys/MAC keys). From that moment, the Record layer protects the client-to-server flow using those keys. As proven in the authentication phase, all the values output by the Handshake are computationally independent; even slight modifications to the transcript fed into the key derivation function would lead to completely different and unpredictable outputs.
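The transcript-dependence of the key schedule can be sketched with HKDF-style derivation. This is a deliberate simplification of the TLS 1.3 key schedule in RFC 8446 (the real schedule uses structured HkdfLabel encodings and multiple extract/expand stages); the labels and single-round expansion here are assumptions for illustration.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract: condense the DH output into a pseudorandom secret.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: bytes, transcript_hash: bytes) -> bytes:
    # Simplified expand: the transcript hash is mixed into every derived key.
    return hmac.new(secret, label + transcript_hash, hashlib.sha256).digest()

shared_secret = b"\x11" * 32  # stand-in for the DH output of the key exchange
transcript = hashlib.sha256(b"ClientHello|ServerHello").digest()

master = hkdf_extract(b"\x00" * 32, shared_secret)
client_key = hkdf_expand_label(master, b"c ap traffic", transcript)
server_key = hkdf_expand_label(master, b"s ap traffic", transcript)

# A one-character change in the transcript yields completely different keys.
transcript2 = hashlib.sha256(b"ClientHello|ServerHello!").digest()
client_key2 = hkdf_expand_label(master, b"c ap traffic", transcript2)
print(client_key != client_key2, client_key != server_key)  # True True
```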

To illustrate the 3P handshake, we first assign specific roles to the three parties involved: the Prover (P) and Node (N) jointly act as the Client, while the DataSource (S) assumes the role of the Server. This handshake process is based on the fundamental principle of Diffie-Hellman key exchange. Given the Client’s public key, all participating parties should be capable of computing a secret sharing of the exchanged secret. It is crucial to emphasize that if any single party gained access to the exchanged secret, it could compute the symmetric keys and engage in unrestricted communication with the client. Therefore, to design a secure multi-party Diffie-Hellman protocol, it is essential for the parties to possess a secret-shared private key. It is worth noting that the public key does not require confidentiality. The protocol proceeds as follows:
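The core principle that neither P nor N alone can recover the exchanged secret can be illustrated with additive private-key shares over classic finite-field Diffie-Hellman. The modular group below is a toy assumption for illustration; zkPass uses elliptic-curve DH, and the share combination happens inside MPC rather than in the clear.

```python
import secrets

# Toy multiplicative group (illustrative only; the protocol uses EC groups).
p = 2**127 - 1   # a Mersenne prime
g = 3

# Prover and Node each hold a private-key *share*; the joint client private
# key a = a_P + a_N is never reconstructed by either party.
a_P = secrets.randbelow(p - 1)
a_N = secrets.randbelow(p - 1)

# The joint client public key is formed from per-party contributions.
A = (pow(g, a_P, p) * pow(g, a_N, p)) % p      # g^(a_P + a_N)

# Server side: ordinary Diffie-Hellman.
b = secrets.randbelow(p - 1)
B = pow(g, b, p)

# Each party computes only a multiplicative *share* of the exchanged secret
# Z = B^(a_P + a_N); combining the shares (done inside MPC in the real
# protocol) yields the same secret the server computes as A^b.
Z_P = pow(B, a_P, p)
Z_N = pow(B, a_N, p)
assert (Z_P * Z_N) % p == pow(A, b, p)
print("shared secret agreed")
```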

The Record layer provides fragments as application data, which signifies the actual communication between the two endpoints. User-sensitive information is contained within this application data, usually in the form of HTTP requests/responses, and is encrypted/authenticated using the application keys. The security of this system is maintained if each party keeps their respective application keys private. This ensures the protection of user data throughout the communication process, minimizing the potential risks of data leakage and unauthorized access.

TLS is performed on streams, and the data in those streams are put into one or more fragments (of max 2^14 bytes). Client message boundaries are not preserved in the record layer (i.e., multiple client messages of the same ContentType MAY be coalesced into a single TLSPlaintext record, or a single message MAY be fragmented across several records). This means that we need to authenticate all records.

In specific usage scenarios, it may be necessary for the Prover to retrieve information from multiple responses originating from different APIs, potentially requiring multiple TLS sessions with a single data source. This implies that the Prover would need to perform multiple rounds of MPC and ZKP. To handle this situation efficiently and reduce overhead, we propose leveraging the same handshake session.

HTTP keep-alive is a mechanism designed to reuse the same connection for multiple HTTP requests and responses. This inherently reduces the number of TLS handshakes, as only one TLS handshake is performed per TCP connection. In other words, with HTTP keep-alive, a single TCP and TLS handshake can accommodate multiple HTTP requests. Consequently, we can transmit different application data pertaining to various requests within a single TLS session.

Furthermore, resuming the TLS session allows the Prover and the DataSource to utilize the same set of keys. The TLS protocol provides a method for session resumption known as Session Tickets, as defined in RFC 5077. The server can utilize a key rotation algorithm to encrypt and decrypt tickets using different keys over time. Additionally, the server can send new tickets to the client after each successful resumption. By implementing this standard, we can ensure an efficient and secure data retrieval process.

Figure 1: Resuming TLS Session with Session Ticket

The content we’ve presented here only partially explains TLS; numerous additional procedures and features remain beyond the scope of this discussion. For a more comprehensive understanding and deeper insight into these topics, we highly recommend referring to RFC 5246 and RFC 8446, the internet standards that provide detailed information about the various aspects and applications of TLS.

The OT protocol is a technique first introduced by Rabin in 1981. Coupled with Garbled Circuits (GC), it can facilitate secure Multi-Party Computation (MPC) protocols. Parties involved in secure computation employ OT to establish a secure communication channel or to exchange encrypted data. This allows them to construct the GC and execute the computation without exposing sensitive information. To illustrate our protocol, we’ll concentrate on the chosen 1-out-of-2 OT, often called standard OT, though it can easily be extended to 1-out-of-N OT.

There are two roles in OT: sender and receiver. Specifically, the sender has two messages $m_0$ and $m_1$, and the receiver has an index $c \in \left\{0, 1\right\}$; the receiver wants to receive the sender's $c$-th message without letting the sender know $c$. At the same time, the sender wants to ensure that the receiver can only receive one of these two messages. In practice, we often let $m_1 = m_0 \oplus \Delta$, such that $m_c = m_0 \oplus c \cdot \Delta$; we call this correlated OT. The following shows the details of implementing basic OT based on Diffie-Hellman key exchange:

Figure 3: The Simplest Protocol for Oblivious Transfer
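The protocol in Figure 3 can be sketched end to end. This is an illustrative simplification: the toy modular group, the hash-to-key step, and one-time-pad encryption are our assumptions; production implementations work over elliptic curves with authenticated encryption.

```python
import hashlib
import secrets

p = 2**127 - 1   # toy DH group (illustrative; real deployments use EC groups)
g = 3

def H(x: int) -> bytes:
    # Hash a group element down to a 32-byte symmetric key.
    return hashlib.sha256(str(x).encode()).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sender holds m0, m1 (padded to the key length); receiver holds choice bit c.
m0 = b"secret #0".ljust(32, b".")
m1 = b"secret #1".ljust(32, b".")
c = 1

# Round 1 (sender): publish A = g^a.
a = secrets.randbelow(p - 1)
A = pow(g, a, p)

# Round 2 (receiver): B = g^b hides c; B = A * g^b encodes c = 1.
b = secrets.randbelow(p - 1)
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

# Sender derives two keys; exactly one of them equals the receiver's key.
k0 = H(pow(B, a, p))
k1 = H(pow((B * pow(A, -1, p)) % p, a, p))
c0, c1 = xor(m0, k0), xor(m1, k1)

# Receiver can derive only k_c = H(A^b) and thus decrypt only m_c.
k_c = H(pow(A, b, p))
m_c = xor([c0, c1][c], k_c)
print(m_c == m1)  # True: receiver got m1 without revealing c to the sender
```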

Given that OT serves as a prerequisite for both Garbled Circuits (GC) and Interactive Zero-Knowledge Proofs (IZKP), and considering our need for large-scale utilization of OT, we require an efficient methodology for generating OT instances that is optimally cost-effective. Additionally, to ensure adequate security of the entire zkPass protocol, it’s crucial that our OT protocol possesses active security.

In the following section, we detail the OT protocol we are utilizing, which requires merely 128 base OT instances. This approach eliminates the need for costly public-key encryption beyond the base OTs and aids in distributing network and computational load effectively, thereby optimizing our protocol’s overall efficiency and security.

GC is an encryption protocol that facilitates secure computation between two parties. It allows two participants, who may not trust each other, to jointly evaluate a function on their respective private inputs without necessitating a trusted third party. For the GC protocol, the function to be evaluated must be defined as a Boolean circuit.

The function underlying this, such as the comparison function in the millionaire problem, is presented as a Boolean circuit with two input gates. Both parties are aware of this circuit. What follows is an illustration of the GC workflow.

A garbling scheme is a five-tuple $G = (G_b, E_n, D_e, E_v, e_v)$. The function $G_b$ maps $f$ and $k$ to $(F, e, d)$, where the strings encode the garbled function, the encoding function, and the decoding function. With $e$ and $x$ one can compute the garbled input $X = E_n(e, x)$; with $F$ and $X$ one can compute the garbled output $Y = E_v(F, X)$; knowing $d$ and $Y$ allows recovery of the final output $y = D_e(d, Y)$, which must be equal to $e_v(f, x)$.

Figure 4: Foundations of Garbled Circuits

Considering that both the Prover and the Node maintain their respective private keys during the key exchange phase of the 3P handshake, both parties need to use their own private keys to compute the Pre-Master Secret (PMS). This PMS is then further derived into a session key. We employ Garbled Circuits (GC) for this calculation. Below, we detail the specific implementation of GC as used in zkPass.

In Non-interactive Zero-Knowledge (NIZK) Proof systems, such as zk-SNARK and zk-STARK, computations are represented as circuits, and the gate constraints within the circuit are depicted as a set of polynomials. If a computation requires multiple circuits, all these circuits need to be amalgamated into a single, large circuit that is then submitted. Despite the trust assumptions associated with this approach, it necessitates a very large memory space which is typically not feasible within browser environments.

To tackle the issue, we employ VOLE (Vector Oblivious Linear Evaluation)-based IZKP. Its linear nature allows us to submit circuits individually, effectively balancing memory size. Moreover, IZKP doesn’t require a trusted setup, thereby enabling the generation of zero-knowledge proofs in a browser environment.

VOLE is the arithmetic counterpart of string OT. Concretely, the VOLE functionality is a two-party functionality that takes a pair of vectors from the sender $P_0$ and allows the receiver $P_1$ to learn a chosen linear combination of these vectors. More formally, given a finite field $F$, the VOLE functionality takes a pair of vectors $(u, v) \in F^n \times F^n$ from $P_0$ and a scalar $x \in F$ from $P_1$. It outputs $w = ux + v$ to $P_1$. We will also consider a randomized version of VOLE where the sender’s inputs $(u, v)$ are picked at random by the functionality and delivered as outputs to the sender. The deterministic VOLE functionality can easily be reduced to the randomized one, analogously to the reduction of OT to random OT. Below is how we generate the VOLE instances:

We employ ICP-ZKP, a variant of interactive protocols that primarily uses symmetric key operations, thus achieving high efficiency and scalability. It can process millions of gates per second and manage large circuits consisting of billions of gates. The protocol includes a method for associating commitment schemes with proof commitment values. The commitment scheme actually serves as an Information-Theoretic Message Authentication Code (IT-MAC). This can be efficiently implemented using VOLE, and its homomorphic properties help to reduce the cost associated with addition gates.

The functionality of the ICP-ZKP is described as follows:

We utilize Bristol-style circuits, which are composed of AND, XOR, and INV gates. The format is defined as follows:

When it comes to modular circuit design, the size of the response from the DataSource isn’t always certain and can be approximately 1 KB. This implies that the scale of the whole circuit for authentication and assertion can be quite large, containing millions of gates. For a browser-like environment, it is impractical to load such a circuit and generate a proof due to the significant memory consumption.

However, we can observe that the main circuit is not just a combination of many sub-circuits: certain sub-circuits are reused repeatedly during the evaluation. Therefore, we can construct a proof system in a modular manner by connecting small specialized “gadget” circuits in a lightweight fashion. Thanks to the schema of ICP-ZK, we can recycle memory: once a certain modular circuit is no longer needed in the calculation, all parties can remove the commitments of that circuit from memory.

From a theoretical perspective, modular circuit designs are flexible and reusable. If a computation naturally presents different “components”, a general-purpose scheme will homogenize them into a single representation, which might lead to performance overhead.

With a CP scheme one can prove statements of the form “$commit(x)$ contains $x$ such that $R(x, w)$”, where $commit(x)$ is a commitment. To see how the CP capability can be used for modular composition, consider the following example of sequential composition, in which one wants to prove that $\exists w, z = h(x, w)$, where $h(x, w) = g(f(x, w), w)$. Such a proof can be built by combining two CP systems $\Pi_f$ and $\Pi_g$ for its two building blocks, i.e., respectively $f$ and $g$: the prover creates a commitment $commit(y)$ of $y$, and then uses $\Pi_f$ (resp. $\Pi_g$) to prove that “$commit(y)$ contains $y = f(x, w)$” (resp. “$commit(y)$ contains $y$ such that $z = g(y, w)$”). Here is what we have done for the sub-circuit generation:

Figure 5: Sub Circuit Generation
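The sequential composition above can be simulated with hash commitments linking the two sub-proofs. This is a plain simulation of the commit-and-prove interface, not a zero-knowledge system (each "proof" below is just the relation check an honest verifier of the opened commitment would perform); the functions $f$, $g$ and the commitment scheme are illustrative assumptions. The point is that the shared commitment $commit(y)$ is what glues $\Pi_f$ and $\Pi_g$ together.

```python
import hashlib
import secrets

def commit(value: int, r: bytes) -> bytes:
    return hashlib.sha256(r + value.to_bytes(16, "big")).digest()

# Illustrative building blocks of h(x, w) = g(f(x, w), w).
def f(x: int, w: int) -> int:
    return (x + w) % 2**64

def g(y: int, w: int) -> int:
    return (y * w) % 2**64

x, w = 41, 7
y = f(x, w)
z = g(y, w)

# The prover commits to the intermediate value y exactly once...
r = secrets.token_bytes(32)
cy = commit(y, r)

# ...then Pi_f proves "cy contains y = f(x, w)" and Pi_g proves
# "cy contains y such that z = g(y, w)"; both refer to the same cy.
def check_pi_f(cy, x, y, w, r):
    return commit(y, r) == cy and y == f(x, w)

def check_pi_g(cy, z, y, w, r):
    return commit(y, r) == cy and z == g(y, w)

print(check_pi_f(cy, x, y, w, r) and check_pi_g(cy, z, y, w, r))  # True
```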

The authentication is to check whether the corresponding HMAC is correct. HMAC is defined as follows:

From a definition perspective, HMAC proof must have two hash operations. In IZKP, the cost of hashes (usually sha256) is high because the response length is uncertain. We need to make some optimizations.

Our optimization exploits the Merkle–Damgård structure. Suppose $b_1$ and $b_2$ are two correctly sized blocks. Then $H(b_1 || b_2)$ is computed as $f_H(f_H(S_i, b_1), b_2)$, where $f_H$ denotes the one-way compression function of $H$, and $S_i$ is the initial state for each round of compression. Based on the above analysis, we separate the HMAC into four steps:

- Outer hash state: $Outer_{hs} = f_H(S_i, k' \oplus opad)$
- Inner hash state: $Inner_{hs} = f_H(S_i, k'\oplus ipad)$
- Inner hash: $Inner_h = f_H(Inner_{hs}, R)$
- $HMAC = f_H(Outer_{hs}, Inner_h)$
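The four-step split agrees with the standard HMAC definition. Since Python's hashlib does not expose the compression function $f_H$, the sketch below checks the coarser two-hash decomposition $HMAC(k, R) = H((k' \oplus opad) \,||\, H((k' \oplus ipad) \,||\, R))$ against the stdlib hmac module; the key and response bytes are illustrative.

```python
import hashlib
import hmac

BLOCK = 64  # SHA-256 block size in bytes

def hmac_sha256_manual(key: bytes, msg: bytes) -> bytes:
    k_prime = key.ljust(BLOCK, b"\x00")            # k' (assumes len(key) <= 64)
    ipad = bytes(b ^ 0x36 for b in k_prime)
    opad = bytes(b ^ 0x5c for b in k_prime)
    # Inner hash: the state after absorbing ipad is Inner_hs; hashing R on
    # top of it yields Inner_h.
    inner = hashlib.sha256(ipad + msg).digest()
    # Outer hash: f_H(Outer_hs, Inner_h) in the paper's notation.
    return hashlib.sha256(opad + inner).digest()

key = b"session-mac-key"
R = b"HTTP/1.1 200 OK\r\n...response body..."
assert hmac_sha256_manual(key, R) == hmac.new(key, R, hashlib.sha256).digest()
print("decomposition matches HMAC")
```

Because `ipad + msg` and `opad + inner` each start with a full 64-byte block, the first compression round of each hash depends only on $k'$, which is exactly what lets the Node precompute and reveal the inner/outer states.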

Revealing the outer hash state or inner hash state would not reveal $k'$, since $f_H$ is assumed to be one-way. So during the 3P handshake, the Node can reveal $Inner_{hs}$ to the Prover; the Prover can use $Inner_{hs}$ as a public signal and its $R$ as the witness to generate a proof that $\pi = ZK\text{-}PoK\left\{R, Inner_{hs}: Inner_h = f_H(Inner_{hs}, R)\right\}$. $Inner_h$ also needs to be committed. Then the Node commits $Outer_{hs}$, so the check of $HMAC$ can be done outside the ZK circuits, which significantly reduces cost.

Now we will reduce the number of hash operations in HMAC to one. We can create a circuit:

In IZK, parsing JSON poses a significant computational burden. The syntax tree of each node contains numerous potential branches, and the parser must traverse each branch. Consequently, as the length of the JSON string increases, the total number of branches grows exponentially. This exponential growth makes ZKP circuits designed for parsing JSON documents impractical due to their sheer size.

However, in practical scenarios, JSON structures typically do not contain sensitive information. The response from the data source often exhibits a similar structure across different users, with user-specific sensitive data typically represented as scalar JSON values. Consequently, we can alleviate this issue by performing JSON parsing outside of the circuit.

Having obtained ind[n], the value must start at index E[n] + 1, so it can be extracted from the wires between E[n] + 1 and E[n] + Len(value), where Len is the bit-length function for the value. Then we can check Assert(R, E[n] + 1, E[n] + Len(value)) = 1.

To facilitate proof of assertions, we utilize a primitive query circuit called Assert. It is designed to easily provide proof of various assertions. Since each case may have different assertions, constructing a large combination circuit with specific query selectors is not feasible. Such an approach would make the main circuit excessively large and difficult to maintain. To address this, our circuit factory offers a range of common circuits. Enterprise users can choose and combine these circuits to fulfill their specific requirements. Currently, we provide support for the following assertion circuits:

- equal or not equal
- greater/less than or equal
- include/all different
- average
- sum
- arithmetic with scalar
- order
- regexp/like
- logical operation
- between or out of range
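The out-of-circuit parsing idea can be sketched as follows: the JSON is parsed in the clear to locate the byte range of the sensitive scalar value, and only the range extraction plus the chosen assertion (here, greater-than-or-equal) would need to go into the circuit. The response bytes, field name, and `assert_gte` interface are illustrative assumptions, and the in-circuit part is simulated in cleartext.

```python
import json

response = b'{"user":"alice","balance":1250}'   # illustrative DataSource response

# Out of circuit: locate the byte range [start, end) of the sensitive value.
key = b'"balance":'
start = response.index(key) + len(key)
end = response.index(b'}', start)

# In circuit (simulated): the prover would show that the committed response
# bytes in [start, end) satisfy the assertion, without revealing the value.
def assert_gte(resp: bytes, start: int, end: int, threshold: int) -> int:
    value = int(resp[start:end])
    return 1 if value >= threshold else 0

print(assert_gte(response, start, end, 1000))   # 1
# Sanity check: the extracted range matches a full JSON parse.
assert json.loads(response)["balance"] == int(response[start:end])
```

Because the structure of the response is the same across users, the indices computed out of circuit do not leak anything user-specific; only the scalar value itself stays private inside the proof.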

We conducted a performance evaluation of the complete protocol implementation using specific hardware setups. The Prover was executed on a MacBook Pro 15-inch Mid 2015, equipped with 16 GB of 1600 MHz DDR3 memory and a 2.5 GHz Intel Core i7 processor. This configuration closely resembles the actual user environment. The Verifier, on the other hand, was deployed on an AWS c6a.2xlarge instance, which offers 8 virtual CPUs and 16 GiB of memory.

**Browser:**

`MacBook Pro 15-inch Mid 2015, 16G Mem 1600 MHz DDR3, 2.5GHz Intel Core i7.`

**Server:**

`aws c6a.2xlarge 8vCPU 16GiB Mem.`

The fields of MPC and ZK related technologies are constantly evolving, with significant advancements being made each year. To ensure the continuous optimization of the zkPass protocol, we are actively staying abreast of the latest technologies. Here are some noteworthy references that we are considering: [FNO14], [NNOB11], [HK20], [HYDK22], [ADST21], [BDOZ10], [FKL+21], [DIO20], [JKO13], [GAZ+21].

**OT.** Traditional OT protocols like IKNP/KOS15, which we currently use, require a security parameter of $λ$ bits of communication for each OT. However, we propose building a new maliciously secure OT extension based on VOLE which only needs $λ/k$ bits for any $k$, at the expense of requiring $2^{k-1}/k$ times the computation.

**GC.** Existing GC evaluation methods such as [ZRE14] and [KS08] evaluate GC functions gate-by-gate using encrypted truth tables. The GC evaluator decrypts the corresponding output label based on input labels. However, interactive protocols offer more sophisticated techniques. For example, we can expose a (masked) private value to a party, allowing them to perform local computation and feed the resulting cleartext value back into the MPC.

**IZK.** VOLE-based IZK ([WYKW20]/[YSWW21]) suffers from high communication overhead, often linear in the circuit size. We are constructing new ZK protocols with communication sublinear in the circuit size, while maintaining a similar level of computational efficiency.
