From 7e37e8c3f28bd67a10920c4cae38787912b56f18 Mon Sep 17 00:00:00 2001
From: Mateusz Jakub Fila
Date: Thu, 29 Jan 2026 14:36:08 +0100
Subject: [PATCH] fix a few spelling typos in the documentation

---
 doc/asio.qbk               |  4 ++--
 doc/fiber.qbk              |  2 +-
 doc/migration.qbk          |  4 ++--
 doc/numa.qbk               |  4 ++--
 doc/overview.qbk           |  2 +-
 doc/rationale.qbk          |  6 +++---
 doc/speculative.qbk        |  8 ++++----
 doc/stack.qbk              |  2 +-
 doc/tuning.qbk             | 12 ++++++------
 doc/unbuffered_channel.qbk |  2 +-
 10 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/doc/asio.qbk b/doc/asio.qbk
index 4011862b..2d5af34e 100644
--- a/doc/asio.qbk
+++ b/doc/asio.qbk
@@ -92,7 +92,7 @@ synchronize different fibers and use asynchronous network I/O at the same time.

 __boost_fiber__ provides the same classes and interfaces as __boost_thread__.
-Therefore developers are able to use patterns familiar from multi-threaded
+Therefore developers are able to use patterns familiar from multithreaded
 programming. For instance the strategy 'serve one client with one thread' could
 be transformed into 'serve one client with one fiber'.

@@ -103,7 +103,7 @@ directory.
 The author believes, that a better, more tight integration is possible but
 requires input of boost.asio's author and maybe some changes in the boost.asio
 framework.
-The current integration pattern requires to runn __io_service__ in
+The current integration pattern requires running __io_service__ in
 __run_service__ (separate fiber).

diff --git a/doc/fiber.qbk b/doc/fiber.qbk
index ea936e9c..d066cf73 100644
--- a/doc/fiber.qbk
+++ b/doc/fiber.qbk
@@ -574,7 +574,7 @@ implementation-defined total order of `fiber::id` values places `*this` before
         operator<<( std::basic_ostream< charT, traitsT > & os, id const& other);

 [variablelist
-[[Efects:] [Writes the representation of `other` to stream `os`. The
+[[Effects:] [Writes the representation of `other` to stream `os`. The
 representation is unspecified.]]
 [[Returns:] [`os`]]
 ]
diff --git a/doc/migration.qbk b/doc/migration.qbk
index 4093518d..e24dbe45 100644
--- a/doc/migration.qbk
+++ b/doc/migration.qbk
@@ -22,8 +22,8 @@ creates a dispatcher fiber for each thread [mdash] this cannot migrate
 either.][superscript,][footnote Of course it would be problematic to migrate a
 fiber that relies on [link thread_local_storage thread-local storage].]

-Migrating a fiber from a logical CPU with heavy workload to another
-logical CPU with a lighter workload might speed up the overall execution.
+Migrating a fiber from a logical CPU with heavy work-load to another
+logical CPU with a lighter work-load might speed up the overall execution.
 Note that in the case of NUMA-architectures, it is not always advisable to
 migrate data between threads. Suppose fiber ['f] is running on logical CPU
 ['cpu0] which belongs to NUMA node ['node0]. The data of ['f] are allocated on
diff --git a/doc/numa.qbk b/doc/numa.qbk
index df8ca646..dd38e9f1 100644
--- a/doc/numa.qbk
+++ b/doc/numa.qbk
@@ -110,7 +110,7 @@ In order to keep the memory access local as possible, the NUMA topology must be
         node: 1 | cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 | distance: 21 10
         done

-The example shows that the systems consits out of 2 NUMA-nodes, to each NUMA-node belong
+The example shows that the system consists of 2 NUMA-nodes; to each NUMA-node belong
 16 logical cpus. The distance measures the costs to access the memory of another NUMA-node.
 A NUMA-node has always a distance `10` to itself (lowest possible value).[br]
 The position in the array corresponds with the NUMA-node ID.
@@ -370,7 +370,7 @@ fiber scheduler).]]
 [[Returns:] [the fiber at the head of the ready queue, or `nullptr` if the
 queue is empty.]]
 [[Throws:] [Nothing.]]
-[[Note:] [Placing ready fibers onto the tail of the sahred queue, and returning them
+[[Note:] [Placing ready fibers onto the tail of the shared queue, and returning them
 from the head of that queue, shares the thread between ready fibers in
 round-robin fashion.]]
 ]
diff --git a/doc/overview.qbk b/doc/overview.qbk
index 3d7b3cdf..ddc874d9 100644
--- a/doc/overview.qbk
+++ b/doc/overview.qbk
@@ -10,7 +10,7 @@
 __boost_fiber__ provides a framework for micro-/userland-threads (fibers)
 scheduled cooperatively.
 The API contains classes and functions to manage and synchronize fibers
-similiarly to __std_thread__.
+similarly to __std_thread__.

 Each fiber has its own stack.

diff --git a/doc/rationale.qbk b/doc/rationale.qbk
index 7fdda7db..0884a12a 100644
--- a/doc/rationale.qbk
+++ b/doc/rationale.qbk
@@ -22,12 +22,12 @@
     [
         [BOOST_FIBERS_SPIN_BACKOFF]
         [limit determines when to used `std::this_thread::yield()` instead of
-        mnemonic `pause/yield` during busy wait (apllies on to `XCHG`-spinlock)]
+        mnemonic `pause/yield` during busy wait (applies only to `XCHG`-spinlock)]
     ]
     [
         [BOOST_FIBERS_SINGLE_CORE]
-        [allways call `std::this_thread::yield()` without backoff during busy wait
-        (apllies on to `XCHG`-spinlock)]
+        [always call `std::this_thread::yield()` without backoff during busy wait
+        (applies only to `XCHG`-spinlock)]
     ]
 ]
diff --git a/doc/speculative.qbk b/doc/speculative.qbk
index 71c8b64b..07a084d8 100644
--- a/doc/speculative.qbk
+++ b/doc/speculative.qbk
@@ -6,7 +6,7 @@
 ]

 [#speculation]
-[section:speculation Specualtive execution]
+[section:speculation Speculative execution]

 [heading Hardware transactional memory]

@@ -14,13 +14,13 @@
 With help of hardware transactional memory multiple logical processors execute
 a critical region speculatively, e.g. without explicit synchronization.[br]
 If the transactional execution completes successfully, then all memory
-operations performed within the transactional region are commited without any
+operations performed within the transactional region are committed without any
 inter-thread serialization.[br]
 When the optimistic execution fails, the processor aborts the transaction and
 discards all performed modifications.[br]
 In non-transactional code a single lock serializes the access to a critical
 region. With a transactional memory, multiple logical processor start a
-transaction and update the memory (the data) inside the ciritical region.
+transaction and update the memory (the data) inside the critical region.
 Unless some logical processors try to update the same data, the transactions
 would always succeed.

@@ -30,7 +30,7 @@ would always succeed.
 TSX is Intel's implementation of hardware transactional memory in modern Intel
 processors[footnote intel.com: [@https://software.intel.com/en-us/node/695149
 Intel Transactional Synchronization Extensions]].[br]
-In TSX the hardware keeps track of which cachelines have been read from and
+In TSX the hardware keeps track of which cache-lines have been read from and
 which have been written to in a transaction. The cache-line size (64-byte) and
 the n-way set associative cache determine the maximum size of memory in a
 transaction. For instance if a transaction modifies 9 cache-lines at a
diff --git a/doc/stack.qbk b/doc/stack.qbk
index fa64c8f9..a52916e8 100644
--- a/doc/stack.qbk
+++ b/doc/stack.qbk
@@ -294,7 +294,7 @@ as stack space which suppresses the errors.

 Sanitizers (GCC/Clang) are confused by the stack switches. The library (and
 Boost.Context too) is required to be compiled with property (b2 command-line)
-`context-impl=ucontext` and compilers santizer options.
+`context-impl=ucontext` and the compiler's sanitizer options.
 Users must define `BOOST_USE_ASAN` before including any Boost.Context headers
 when linking against Boost binaries.
diff --git a/doc/tuning.qbk b/doc/tuning.qbk
index 4e7f5cac..2fb5a526 100644
--- a/doc/tuning.qbk
+++ b/doc/tuning.qbk
@@ -21,7 +21,7 @@ and/or fibers are not synchronized between threads.
 Memory allocation algorithm is significant for performance in a multithreaded
 environment, especially for __boost_fiber__ where fiber stacks are allocated on
 the heap. The default user-level memory allocator (UMA) of glibc is ptmalloc2
-but it can be replaced by another UMA that fit better for the concret work-load
+but it can be replaced by another UMA that fits better for the concrete work-load.
 For instance Google[s]
 [@http://goog-perftools.sourceforge.net/doc/tcmalloc.html TCmalloc] enables a
 better performance at the ['skynet] microbenchmark than glibc[s] default memory
@@ -35,7 +35,7 @@ cooperatively, rather than preemptively.
 Depending on the work-load several strategies of scheduling the fibers are
 possible [footnote 1024cores.net:
 [@http://www.1024cores.net/home/scalable-architecture/task-scheduling-strategies Task Scheduling Strategies]]
-that can be implmented on behalf of __algo__.
+that can be implemented on behalf of __algo__.

 * work-stealing: ready fibers are hold in a local queue, when the
   fiber-scheduler's local queue runs out of ready fibers, it randomly
@@ -51,11 +51,11 @@ that can be implmented on behalf of __algo__.
   concurrently push and pop ready fibers to/from the global queue
   (implemented in __shared_work__)

-* work-distribution: fibers that became ready are proactivly distributed to
+* work-distribution: fibers that become ready are proactively distributed to
   idle fiber-schedulers or fiber-schedulers with low load

 * work-balancing: a dedicated (helper) fiber-scheduler periodically collects
-  informations about all fiber-scheduler running in other threads and
+  information about all fiber-schedulers running in other threads and
   re-distributes ready fibers among them

@@ -121,12 +121,12 @@ are disabled.]
 Modern multi-socket systems are usually designed as [link numa NUMA systems].
 A suitable fiber scheduler like __numa_work_stealing__ reduces
-remote memory access (latence).
+remote memory access (latency).

 [heading Parameters]

-[table Parameters that migh be defiend at compiler's command line
+[table Parameters that might be defined at the compiler's command line
     [
         [Parameter]
         [Default value]
diff --git a/doc/unbuffered_channel.qbk b/doc/unbuffered_channel.qbk
index f03c9a87..7eed3d05 100644
--- a/doc/unbuffered_channel.qbk
+++ b/doc/unbuffered_channel.qbk
@@ -7,7 +7,7 @@

 [section:unbuffered_channel Unbuffered Channel]

-__boost_fiber__ provides template `unbuffered_channel` suitable to synchonize
+__boost_fiber__ provides template `unbuffered_channel` suitable to synchronize
 fibers (running on same or different threads) via synchronous message passing.
 A fiber waiting to consume an value will block until the value is produced. If
 a fiber attempts to send a value through an unbuffered channel and no fiber
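
The final hunk touches the introduction of `unbuffered_channel`. As a quick illustration of the synchronous hand-off described there, here is a minimal sketch; the producer/consumer fibers, the `int` payload, and the loop bounds are illustrative assumptions, not code taken from the Boost.Fiber documentation. `push()` blocks until a consumer is ready to `pop()`, and `close()` lets the consumer's loop terminate.

    // Minimal sketch: synchronous hand-off between two fibers through
    // boost::fibers::unbuffered_channel< int >.
    #include <boost/fiber/all.hpp>
    #include <iostream>

    int main() {
        boost::fibers::unbuffered_channel< int > chan;

        boost::fibers::fiber producer( [&chan]{
            for ( int i = 0; i < 3; ++i) {
                chan.push( i);   // blocks until the consumer pops this value
            }
            chan.close();        // afterwards pop() returns channel_op_status::closed
        });

        boost::fibers::fiber consumer( [&chan]{
            int value;
            // pop() blocks for the next value and reports 'closed' once the
            // channel was closed and drained
            while ( boost::fibers::channel_op_status::success == chan.pop( value) ) {
                std::cout << "received " << value << '\n';
            }
        });

        producer.join();
        consumer.join();
        return 0;
    }

Because the channel has no internal buffer, every `push()` is a rendezvous: the producer can never run ahead of the consumer, which is the blocking behaviour the corrected sentence describes.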