Upload, PoDSI, and Deal-Making

In Section A, we cover uploading files, getting the PoDSI, and making deals.

1) Upload via Lighthouse SDK

Step 1: Upload your first file using Lighthouse SDK

First, you'll need a picture of your favorite pupper that you want to store on the decentralized web.
The Lighthouse SDK is a JavaScript library for uploading files to the Filecoin network. It's open source and available here
A. Uploading a file is as simple as:

```javascript
import lighthouse from "@lighthouse-web3/sdk";

// ... other code

const uploadResponse = await lighthouse.upload('/path/to/adorable/dog.jpg', 'YOUR_API_KEY');
```
Previously, if your puppy's file was too small, it would run into issues being stored on-chain because of size minimums enforced by on-chain deal makers. The SDK works around this by appending mock data (via the deal parameters below) so your file meets the minimum size requirement.
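As a back-of-the-envelope illustration of how much padding a small file needs, here is a hypothetical helper (not part of the SDK; the 2 MB target used in the example is an assumption matching the 256 KB example in Step 2):

```javascript
// Hypothetical helper (not part of the SDK): estimate the `add_mock_data`
// value (in MB) needed to pad a file up to a target minimum size.
function mockDataMB(fileSizeBytes, minSizeMB) {
  const fileSizeMB = fileSizeBytes / (1024 * 1024);
  if (fileSizeMB >= minSizeMB) return 0; // already large enough, no padding
  // Round the remaining gap up to the next whole MB of mock data.
  return Math.ceil(minSizeMB - fileSizeMB);
}

// With an assumed 2 MB target, a 256 KB puppy picture needs add_mock_data: 2.
console.log(mockDataMB(256 * 1024, 2)); // 2
```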
B. To upload a file with replication:
Replication makes multiple copies of your file and stores them with different storage providers on the Filecoin network. This ensures that if one storage provider goes down, you can still retrieve your file from another.
You can get the API key from https://files.lighthouse.storage/ or via the CLI.

```javascript
import lighthouse from "@lighthouse-web3/sdk";

// ... other code

// Replicate the deal for your file to a total of two copies on the network.
const dealParams = {
  num_copies: 2,
};

// The `false` indicates that we're uploading a single file.
// Returns a CID (Content ID) for your file that you can use for PoDSI verification.
const uploadResponse = await lighthouse.upload('/path/to/adorable/dog.jpg', 'YOUR_API_KEY', false, dealParams);
```

Step 2: Set Deal Parameters

Note: Deal parameters are currently supported on the Calibration testnet. If you don't specify deal parameters, the deal is made on Filecoin mainnet.
When uploading a file, you can customize how it's stored in Lighthouse using the deal parameters:
num_copies: How many backup copies you want for your file. The maximum is 3. For instance, if set to 3, your file will be stored by 3 different storage providers.
repair_threshold: Determines when a storage sector is considered "broken" if a provider fails to confirm they still have your file. It's measured in epochs, with 28800 epochs being roughly 10 days.
renew_threshold: Specifies when your storage deal should be renewed. It's also measured in epochs.
miner: If you have preferred miners, list their addresses here. For testing, t017840 is recommended.
network: This should always be set to 'calibration' (for RaaS services to function) unless you want to use mainnet.
add_mock_data: Pads smaller files so they reach the minimum file size accepted on the Lighthouse Calibration test network (1 MB). If your file is below the minimum, add_mock_data appends a mock file so it meets the storage requirement. The value is the padding size in MB; for instance, if your file is 256 KB, set add_mock_data to 2.
An "epoch" is a time unit in the Filecoin network during which various operations occur (PoSt, PoRep, etc.), with 2880 epochs equivalent to one day.
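Since the thresholds above are expressed in epochs, a small conversion sketch (illustrative only, using the 2880-epochs-per-day figure) can make parameter values easier to sanity-check:

```javascript
// Illustrative helpers based on ~2880 Filecoin epochs per day (one epoch ≈ 30s).
const EPOCHS_PER_DAY = 2880;

const daysToEpochs = (days) => days * EPOCHS_PER_DAY;
const epochsToDays = (epochs) => epochs / EPOCHS_PER_DAY;

console.log(daysToEpochs(10));  // 28800 — the repair_threshold example above
console.log(epochsToDays(240)); // ~0.083 days, i.e. two hours
```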
Example:

```javascript
// Sample deal parameters
const dealParams = {
  num_copies: 2,
  repair_threshold: 28800,
  renew_threshold: 240,
  miner: ["t017840"],
  network: 'calibration',
  add_mock_data: 2
};
```
```javascript
const path = "/path/to/file.jpg"
const apiKey = "thisisateststring"

const dealParam_default = {
  "network": "calibration"
}

// Adds mock data to satisfy the minimum file size.
const dealParam_mock = {
  "add_mock_data": 4,
  "network": "calibration"
}

// To ignore a deal parameter, set it to null.
const dealParam_ignore = {
  "num_copies": null,
  "repair_threshold": null,
  "renew_threshold": null,
  "network": "calibration"
}

// Default parameters set. All RaaS workers enabled, any miner can take the deal. 2 MiB mock file added.
const responseDefault = await lighthouse.upload(path, apiKey, false, dealParam_default);

// Use this if you want to bundle a 4 MiB mock file with the user submission.
const responseMock = await lighthouse.upload(path, apiKey, false, dealParam_mock);

// Use this with the self-hosted RaaS module and the aggregator SDK after the event is emitted.
// Turns off all RaaS workers. 2 MiB mock file added.
const responseIgnore = await lighthouse.upload(path, apiKey, false, dealParam_ignore);
```

Step 3: Understanding and getting the PoDSI for your file

Now that you've registered the picture of your puppy, how would you know that it's actually being maintained on the Filecoin network? This is where the PoDSI comes in. The PoDSI is a proof that your file is being maintained on the Filecoin network.
Proof of Data Segment Inclusion (PoDSI) is like a certificate of authenticity. It assures that your file is safely tucked inside a special package, known as a "deal", made by the Lighthouse Node. This node combines several files, gives them a unique ID, offers proof of their inclusion, and even throws a mini-proof of the entire package's structure.
The time between uploading and being able to get your PoDSI should only be a few minutes. You can get the PoDSI for your file by calling the get_proof endpoint in one of the following ways:
via Axios in Node.js:

```javascript
import axios from "axios";

let response = await axios.get("https://api.lighthouse.storage/api/lighthouse/get_proof", {
  params: {
    cid: lighthouse_cid,
    network: "testnet" // Change the network to mainnet when ready
  }
})
```
or via curl:

```shell
# Assumes that you uploaded your file to mainnet.
# Alternatively, if you are using testnet, add &network=testnet to the end of the URL.
curl "https://api.lighthouse.storage/api/lighthouse/get_proof?cid=<puppy_CID>"
```

curl example:

```shell
# An example of how to get the PoDSI for a file uploaded to testnet
curl "https://api.lighthouse.storage/api/lighthouse/get_proof?cid=QmS7Do1mDZNBJAVyE8N9r6wYMdg27LiSj5W9mmm9TZoeWp&network=testnet"
```
The response, an example of a PoDSI proof on Calibration, should look something like this:
```json
{
  "pieceCID": "baga6ea4seaqgbiszkxkzmaxio5zjucpg2sd4n6abvmcsenah27g4xtjszxtzmia",
  "pieceSize": 4194304,
  "carFileSize": 4161536,
  "proof": {
    "pieceCID": "baga6ea4seaqn6s6n3irnz2ewfwlybhpjzrg6i57fzuwletj5sxcv7hz5rauewli",
    "id": "19845d2a-4fae-426c-893d-491770c317e8",
    "lastUpdate": 1692888301,
    "fileProof": {
      "verifierData": {
        "commPc": "0181e203922020df4bcdda22dce8962d97809de9cc4de477e5cd2cb24d3d95c55f9f3d88284b2d",
        "sizePc": "200000"
      },
      "inclusionProof": {
        "proofIndex": {
          "index": "ffe0",
          "path": [
            "f5a5fd42d16a20302798ef6ed309979b43003d2320d9f0e8ea9831a92759fb0b",
            "3731bb99ac689f66eef5973e4a94da188f4ddcae580724fc6f3fd60dfd488333",
            "642a607ef886b004bf2c1978463ae1d4693ac0f410eb2d1b7a47fe205e5e750f",
            "57a2381a28652bf47f6bef7aca679be4aede5871ab5cf3eb2c08114488cb8526",
            "1f7ac9595510e09ea41c460b176430bb322cd6fb412ec57cb17d989a4310372f",
            "fc7e928296e516faade986b28f92d44a4f24b935485223376a799027bc18f833",
            "08c47b38ee13bc43f41b915c0eed9911a26086b3ed62401bf9d58b8d19dff624",
            "b2e47bfb11facd941f62af5c750f3ea5cc4df517d5c4f16db2b4d77baec1a32f",
            "f9226160c8f927bfdcc418cdf203493146008eaefb7d02194d5e548189005108",
            "2c1a964bb90b59ebfe0f6da29ad65ae3e417724a8f7c11745a40cac1e5e74011",
            "fee378cef16404b199ede0b13e11b624ff9d784fbbed878d83297e795e024f02",
            "8e9e2403fa884cf6237f60df25f83ee40dca9ed879eb6f6352d15084f5ad0d3f",
            "752d9693fa167524395476e317a98580f00947afb7a30540d625a9291cc12a07",
            "7022f60f7ef6adfa17117a52619e30cea82c68075adf1c667786ec506eef2d19",
            "d99887b973573a96e11393645236c17b1f4c7034d723c7a99f709bb4da61162b",
            "df4bcdda22dce8962d97809de9cc4de477e5cd2cb24d3d95c55f9f3d88284b2d"
          ]
        },
        "proofSubtree": {
          "index": "0",
          "path": [
            "83ccb895e53b292546ccda9c45017c247ffa54b406f492605c9148e09aa2f208"
          ]
        }
      },
      "indexRecord": {
        "checksum": "4a8e39cfd5af583596f54f95954a991b",
        "proofIndex": "df4bcdda22dce8962d97809de9cc4de477e5cd2cb24d3d95c55f9f3d88284b2d",
        "proofSubtree": 0,
        "size": 2097152
      }
    }
  },
  "dealInfo": [
    {
      "dealUUID": "f064d4d5-7b35-4647-8df7-91fb8fb99f23",
      "dealId": 13279,
      "storageProvider": "t017840"
    },
    {
      "dealUUID": "ae8f6709-5ca0-4944-abb1-cd04cf05e0c3",
      "dealId": null,
      "storageProvider": "t017819"
    }
  ],
  "previousAggregates": [
    "975afcd3-ff3e-4395-a50e-24500ca0bfb7"
  ]
}
```
1. The pieceCID is a content identifier used to reference data in distributed information systems by its contents rather than its location, using cryptographic hashing and self-describing formats. It is a core component of IPFS and IPLD; you can read more about it here.
2. The proof contains information that can be used to confirm whether your file was included in a specific aggregated data bundle.
3. The dealInfo provides details about the file's storage deal. If the "dealId" is null, the storage deal has been initiated but the miner hasn't started the sealing process yet.
4. The previousAggregates parameter lists older aggregate IDs for the file, if the file's storage deal has been renewed. You can use these IDs to get more details about previous aggregates. To do this, use the provided API link, substituting the appropriate aggregate ID and network information.
Previous Aggregates Info
To get information about a previous aggregate with the ID '975afcd3-ff3e-4395-a50e-24500ca0bfb7' on testnet, you would use the following:

```shell
curl "https://api.lighthouse.storage/api/lighthouse/aggregate_info?aggregateId=975afcd3-ff3e-4395-a50e-24500ca0bfb7&network=testnet"
```

Step 4: Get your deal ID from your upload

When you upload the picture of your puppy, the on-chain deal made to store it on the Filecoin network is assigned a unique deal ID. You can get this deal ID the same way you get the PoDSI for your file: in the response above, it is available in the dealInfo array (e.g. response.data.dealInfo[0].dealId).
Under the hood, the node infrastructure is working hard to ensure that your file is included on-chain. The process of deal making can take up to about an hour.
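Because dealId stays null until the storage provider starts sealing, it can be convenient to filter a PoDSI response down to the deals that are already confirmed on-chain. The helper below is an assumption for illustration (it is not part of the SDK), operating on the response shape shown in Step 3:

```javascript
// Hypothetical helper: collect confirmed on-chain deal IDs from a PoDSI
// response (dealId is null while the provider has not started sealing).
function confirmedDealIds(podsiResponse) {
  return (podsiResponse.dealInfo || [])
    .filter((deal) => deal.dealId !== null)
    .map((deal) => deal.dealId);
}

// Using the dealInfo from the example response in Step 3:
const example = {
  dealInfo: [
    { dealUUID: "f064d4d5-7b35-4647-8df7-91fb8fb99f23", dealId: 13279, storageProvider: "t017840" },
    { dealUUID: "ae8f6709-5ca0-4944-abb1-cd04cf05e0c3", dealId: null, storageProvider: "t017819" },
  ],
};
console.log(confirmedDealIds(example)); // [ 13279 ]
```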

Step 5: Download your file using the file’s CID

Now that your file is stored on the Filecoin network, you can retrieve it using its CID. You can do this by calling the download function in one of the following ways:
via curl:

```shell
# Downloads the file through the Lighthouse IPFS gateway.
curl -o fileName https://gateway.lighthouse.storage/ipfs/<cid>
```
or via code:

```javascript
import axios from "axios";
import fs from "fs";

const lighthouseDealDownloadEndpoint = 'https://gateway.lighthouse.storage/ipfs/';

function saveResponseToFile(response, filePath) {
  const writer = fs.createWriteStream(filePath);

  // Pipe the response data to the file
  response.data.pipe(writer);

  return new Promise((resolve, reject) => {
    writer.on('finish', () => resolve(filePath));
    writer.on('error', (err) => {
      console.error(err);
      reject(err);
    });
  });
}

let response = await axios({
  method: 'GET',
  url: `${lighthouseDealDownloadEndpoint}${lighthouse_cid}`,
  responseType: 'stream',
});

try {
  const filePath = await saveResponseToFile(response, downloadPath);
  console.log(`File saved at ${filePath}`);
} catch (err) {
  console.error(`Error saving file: ${err}`);
}
```

2) Upload via Lighthouse Smart Contract

In this method, we will pass a CID to the Lighthouse smart contract deployed at the following address:
  • Calibration Testnet: 0x01ccBC72B2f0Ac91B79Ff7D2280d79e25f745960
The source code for this contract can be found here

Smart Contract Interface

Within the smart contract interface, some important features are critical to the RaaS service. These include:
| # | Function Name | Purpose | Key Parameters | Outcome |
|---|---------------|---------|----------------|---------|
| 1 | submit | Submits a new deal request to the oracle and creates a new deal. By default, there are no renewals or replications for this deal. | _cid | Event: SubmitAggregatorRequest |
| 2 | submitRaaS | Submits a new deal request to the oracle and creates a new deal. Here the user can define deal parameters. | _cid, _replication_target, _repair_threshold, _renew_threshold | Event: SubmitAggregatorRequestWithRaaS |
| 3 | getAllDeals | Gets all deal IDs for a specified CID. | _cid | Deal[] |
| 4 | getActiveDeals | Returns all the CID's active deal IDs. Critical for replication deals. | _cid | Deal[] |
| 5 | getExpiringDeals | Returns all deal IDs for deals expiring within the given number of epochs. Critical for renewal and repair jobs. | _cid, epochs | Deal[] |

Calling SubmitRaaS Function

You can interact with the smart contract by submitting a CID of your choice to the submitRaaS function. This will create a new deal request that the Lighthouse RaaS Worker will pick up when attached, as discussed in Section B.

```javascript
// contractInstance is the address of the contract you deployed or the aggregator-hosted RaaS address above.
const dealStatus = await ethers.getContractAt("DealStatus", contractInstance);
// Submit the CID of the file you want to upload to the Filecoin network,
// along with the replication target, repair threshold, and renew threshold.
await dealStatus.submitRaaS(ethers.utils.toUtf8Bytes(newJob.cid), 2, 4, 40);
```
Uploading with the submit function will not start deal-making on the Filecoin network by default. To start deal-making for the CID passed through the submit function, refer to Section B, Attaching the RaaS (renew, repair, replication) Worker.

3) Why does all this matter?

We see a bright future in enabling programmable, immutable, decentralized data storage for developers.
Lighthouse SDK is designed to be simple and easy to use. We hope that this will enable developers to easily integrate the Filecoin network as the primary data storage layer for their applications.
More importantly, this enables developers to build novel applications. Imagine a dapp or DAO built to incentivize, analyze, and store upload metadata on-chain. Here are a few examples:
  • Rewarding $TOKEN based on the upload of a particular file and their CID.
  • Being able to track CIDs and deal IDs onchain for verification and airdropping.
  • Building more advanced, robust DataDAOs (check out the starter kit here!)
For your consideration, here's some pseudocode of how you could build a simple dapp that rewards users for uploading files to the Filecoin network:
```solidity
function uploadFile(bytes32 fileCID) public {
    // Check if the file has already been uploaded
    require(!fileExists(fileCID), "File already exists");
    // Check if the user's file contains the correct data
    // The logic in verifyPoDSI() depends on your specific application
    // Check out the various possibilities here: https://docs.filecoin.io/smart-contracts/developing-contracts/solidity-libraries/
    require(verifyPoDSI(fileCID), "File does not contain the correct data");
    // Save the file's CID to prevent replay attacks
    saveFile(fileCID);
    // Reward the user for uploading the file
    // You can mint them a token or send them some $FIL
    // Read more here: https://docs.filecoin.io/smart-contracts/developing-contracts/ethereum-libraries/#example-using-an-erc20-contract
    rewardUser(msg.sender);
}
```