diff --git a/docs/add-code-flow.md b/docs/add-code-flow.md
index 264731a3b13b4cc474355299fafd26302aa15941..353d47166b0ac9905dcf38918233fbe1da33ff20 100644
--- a/docs/add-code-flow.md
+++ b/docs/add-code-flow.md
@@ -55,7 +55,7 @@ Within the function, a new `Adder` is created with the configured `Blockstore` a
 
 1. **[`adder.add(io.Reader)`](https://github.com/ipfs/go-ipfs/blob/v0.4.18/core/coreunix/add.go#L115)** - *Create and return the **root** __DAG__ node*
 
-   This method converts the input data (`io.Reader`) to a __DAG__ tree, by splitting the data into _chunks_ using the `Chunker` and organizing them in to a __DAG__ (with a *trickle* or *balanced* layout. See [balanced](https://github.com/ipfs/go-unixfs/blob/6b769632e7eb8fe8f302e3f96bf5569232e7a3ee/importer/balanced/builder.go) for more info).
+   This method converts the input data (`io.Reader`) to a __DAG__ tree, by splitting the data into _chunks_ using the `Chunker` and organizing them into a __DAG__ (with a *trickle* or *balanced* layout. See [balanced](https://github.com/ipfs/go-unixfs/blob/6b769632e7eb8fe8f302e3f96bf5569232e7a3ee/importer/balanced/builder.go) for more info).
 
    The method returns the **root** `ipld.Node` of the __DAG__.
 
@@ -70,7 +70,7 @@ Within the function, a new `Adder` is created with the configured `Blockstore` a
 
 - **[MFS] [`PutNode(mfs.Root, path, ipld.Node)`](https://github.com/ipfs/go-mfs/blob/v0.1.18/ops.go#L86)** - *Insert node at path into given `MFS`*
 
-  The `path` param is used to determine the `MFS Directory`, which is first looked up in the `MFS` using `lookupDir()` function. This is followed by adding the **root** __DAG__ node (`ipld.Node`) in to this `Directory` using `directory.AddChild()` method.
+  The `path` param is used to determine the `MFS Directory`, which is first looked up in the `MFS` using `lookupDir()` function. This is followed by adding the **root** __DAG__ node (`ipld.Node`) into this `Directory` using `directory.AddChild()` method.
 
 - **[MFS] Add Child To `UnixFS`**
   - **[`directory.AddChild(filename, ipld.Node)`](https://github.com/ipfs/go-mfs/blob/v0.1.18/dir.go#L350)** - *Add **root** __DAG__ node under this directory*
diff --git a/docs/config.md b/docs/config.md
index e40ac5887e4cdce2b84657ea4494aaac21afd44d..a6e9c699f4d72fb6b4dd65c3fcd053e2010af79f 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -846,7 +846,7 @@ Options for [ZeroConf](https://github.com/libp2p/zeroconf#readme) Multicast DNS-
 
 #### `Discovery.MDNS.Enabled`
 
-A boolean value for whether or not Multicast DNS-SD should be active.
+A boolean value to activate or deactivate Multicast DNS-SD.
 
 Default: `true`
 
@@ -934,7 +934,7 @@ Type: `object[string -> array[string]]`
 
 ### `Gateway.RootRedirect`
 
-A url to redirect requests for `/` to.
+A URL to redirect requests for `/` to.
 
 Default: `""`
 
@@ -1410,7 +1410,7 @@ Type: `string` (filesystem path)
 
 ### `Mounts.FuseAllowOther`
 
-Sets the 'FUSE allow other'-option on the mount point.
+Sets the 'FUSE allow-other' option on the mount point.
 
 ## `Pinning`
 
diff --git a/docs/datastores.md b/docs/datastores.md
index f574bc6a5ddbf86dbe154db89477575aabab628e..9ba500a5956c89ff1cb19365736844c3710eec20 100644
--- a/docs/datastores.md
+++ b/docs/datastores.md
@@ -12,13 +12,13 @@ field in the ipfs configuration file.
 
 ## flatfs
 
-Stores each key value pair as a file on the filesystem.
+Stores each key-value pair as a file on the filesystem.
 
 The shardFunc is prefixed with `/repo/flatfs/shard/v1` then followed by a descriptor of the sharding strategy. Some example values are:
 - `/repo/flatfs/shard/v1/next-to-last/2`
   - Shards on the two next to last characters of the key
 - `/repo/flatfs/shard/v1/prefix/2`
-  - Shards based on the two character prefix of the key
+  - Shards based on the two-character prefix of the key
 
 ```json
 {
@@ -34,7 +34,7 @@ The shardFunc is prefixed with `/repo/flatfs/shard/v1` then followed by a descri
 NOTE: flatfs must only be used as a block store (mounted at `/blocks`) as it only partially implements the datastore interface. You can mount flatfs for /blocks only using the mount datastore (described below).
 
 ## levelds
-Uses a leveldb database to store key value pairs.
+Uses a leveldb database to store key-value pairs.
 
 ```json
 {
@@ -46,7 +46,7 @@ Uses a leveldb database to store key value pairs.
 
 ## pebbleds
 
-Uses [pebble](https://github.com/cockroachdb/pebble) as a key value store.
+Uses [pebble](https://github.com/cockroachdb/pebble) as a key-value store.
 
 ```json
 {
@@ -90,7 +90,7 @@ When installing a new version of kubo when `"formatMajorVersion"` is configured,
 
 ## badgerds
 
-Uses [badger](https://github.com/dgraph-io/badger) as a key value store.
+Uses [badger](https://github.com/dgraph-io/badger) as a key-value store.
 
 > [!CAUTION]
 > This is based on very old badger 1.x, which has known bugs and is no longer supported by the upstream team.
@@ -99,7 +99,7 @@ Uses [badger](https://github.com/dgraph-io/badger) as a key value store.
 
 * `syncWrites`: Flush every write to disk before continuing. Setting this to false is safe as kubo will automatically flush writes to disk before and after performing critical operations like pinning. However, you can set this to true to be extra-safe (at the cost of a 2-3x slowdown when adding files).
 
-* `truncate`: Truncate the DB if a partially written sector is found (defaults to true). There is no good reason to set this to false unless you want to manually recover partially written (and unpinned) blocks if kubo crashes half-way through adding a file.
+* `truncate`: Truncate the DB if a partially written sector is found (defaults to true). There is no good reason to set this to false unless you want to manually recover partially written (and unpinned) blocks if kubo crashes half-way through a write operation.
 
 ```json
 {
diff --git a/docs/experimental-features.md b/docs/experimental-features.md
index fbee3d480b134fc44e341c9929598aba2df53654..ef55691ba8c829f4fae911057c59f8feac0eb4e6 100644
--- a/docs/experimental-features.md
+++ b/docs/experimental-features.md
@@ -398,7 +398,7 @@ We also support the use of protocol names of the form /x/$NAME/http where $NAME
 ### Road to being a real feature
 
 - [ ] Needs p2p streams to graduate from experiments
-- [ ] Needs more people to use and report on how well it works / fits use cases
+- [ ] Needs more people to use and report on how well it works and fits use cases
 - [ ] More documentation
 - [ ] Need better integration with the subdomain gateway feature.
 
diff --git a/docs/implement-api-bindings.md b/docs/implement-api-bindings.md
index 3587ac21f471c2e815ddb43fe12cf002b8869d44..d0273d9e735828bab285327e33ec58772eed4947 100644
--- a/docs/implement-api-bindings.md
+++ b/docs/implement-api-bindings.md
@@ -39,12 +39,12 @@ function calls. For example:
 #### CLI API Transport
 
 In the commandline, IPFS uses a traditional flag and arg-based mapping, where:
-- the first arguments selects the command, as in git - e.g. `ipfs dag get`
+- the first arguments select the command, as in git - e.g. `ipfs dag get`
 - the flags specify options - e.g. `--enc=protobuf -q`
 - the rest are positional arguments - e.g. `ipfs key rename <name> <newName>`
 - files are specified by filename, or through stdin
 
-(NOTE: When kubo runs the daemon, the CLI API is actually converted to HTTP
+(NOTE: When kubo runs the daemon, the CLI API is converted to HTTP
 calls. otherwise, they execute in the same process)
 
 #### HTTP API Transport
@@ -87,7 +87,7 @@ Despite all the generalization spoken about above, the IPFS API is actually
 very simple. You can inspect all the requests made with `nc` and the `--api`
 option (as of [this PR](https://github.com/ipfs/kubo/pull/1598), or `0.3.8`):
 
-```
+```sh
 > nc -l 5002 &
 > ipfs --api /ip4/127.0.0.1/tcp/5002 swarm addrs local --enc=json
 POST /api/v0/version?enc=json&stream-channels=true HTTP/1.1
@@ -104,7 +104,7 @@ The only hard part is getting the file streaming right. It is (now) fairly
 easy to stream files to kubo using multipart. Basically, we end up with HTTP
 requests like this:
 
-```
+```sh
 > nc -l 5002 &
 > ipfs --api /ip4/127.0.0.1/tcp/5002 add -r ~/demo/basic/test
 POST /api/v0/add?encoding=json&progress=true&r=true&stream-channels=true HTTP/1.1
diff --git a/docs/releases.md b/docs/releases.md
index d42feea7bc892d90759f3e17669c8ae0c709ee26..718c2da9326857d1347526f9faabf896a9663895 100644
--- a/docs/releases.md
+++ b/docs/releases.md
@@ -20,9 +20,9 @@
 
 ## Release Philosophy
 
-`kubo` aims to have release every six weeks, two releases per quarter. During these 6 week releases, we go through 4 different stages that gives us the opportunity to test the new version against our test environments (unit, interop, integration), QA in our current production environment, IPFS apps (e.g. Desktop and WebUI) and with our community and _early testers_<sup>[1]</sup> that have IPFS running in production.
+`kubo` aims to have a release every six weeks, two releases per quarter. During these 6 week releases, we go through 4 different stages that allow us to test the new version against our test environments (unit, interop, integration), QA in our current production environment, IPFS apps (e.g. Desktop and WebUI) and with our community and _early testers_<sup>[1]</sup> that have IPFS running in production.
 
-We might expand the six week release schedule in case of:
+We might expand the six-week release schedule in case of:
 
 - No new updates to be added
 - In case of a large community event that takes the core team availability away (e.g. IPFS Conf, Dev Meetings, IPFS Camp, etc.)
@@ -59,7 +59,7 @@ Test the release in as many non-production environments as possible. This is rel
 
 ### Stage 3 - Community Prod Testing
 
-At this stage, we consider the release to be "production ready" and will ask the community and our early testers to (partially) deploy the release to their production infrastructure.
+At this stage, we consider the release to be "production-ready" and will ask the community and our early testers to (partially) deploy the release to their production infrastructure.
 
 **Goals:**
 
@@ -69,7 +69,7 @@ At this stage, we consider the release to be "production ready" and will ask the
 
 ### Stage 4 - Release
 
-At this stage, the release is "battle hardened" and ready for wide deployment.
+At this stage, the release is "battle-hardened" and ready for wide deployment.
 
 ## Release Cycle
 