4 changes: 2 additions & 2 deletions doc/asio.qbk
@@ -92,7 +92,7 @@ synchronize different fibers and use asynchronous network I/O at the same
time.

__boost_fiber__ provides the same classes and interfaces as __boost_thread__.
- Therefore developers are able to use patterns familiar from multi-threaded
+ Therefore developers are able to use patterns familiar from multithreaded
programming. For instance the strategy 'serve one client with one thread'
could be transformed into 'serve one client with one fiber'.

@@ -103,7 +103,7 @@ directory. The author believes, that a better, more tight integration is
possible but requires input of boost.asio's author and maybe some changes in the
boost.asio framework.

- The current integration pattern requires to runn __io_service__ in
+ The current integration pattern requires running __io_service__ in
__run_service__ (separate fiber).


2 changes: 1 addition & 1 deletion doc/fiber.qbk
@@ -574,7 +574,7 @@ implementation-defined total order of `fiber::id` values places `*this` before
operator<<( std::basic_ostream< charT, traitsT > & os, id const& other);

[variablelist
- [[Efects:] [Writes the representation of `other` to stream `os`. The
+ [[Effects:] [Writes the representation of `other` to stream `os`. The
representation is unspecified.]]
[[Returns:] [`os`]]
]
4 changes: 2 additions & 2 deletions doc/migration.qbk
@@ -22,8 +22,8 @@ creates a dispatcher fiber for each thread [mdash] this cannot migrate
either.][superscript,][footnote Of course it would be problematic to migrate a
fiber that relies on [link thread_local_storage thread-local storage].]

- Migrating a fiber from a logical CPU with heavy workload to another
- logical CPU with a lighter workload might speed up the overall execution.
+ Migrating a fiber from a logical CPU with heavy work-load to another
+ logical CPU with a lighter work-load might speed up the overall execution.
Note that in the case of NUMA-architectures, it is not always advisable to
migrate data between threads. Suppose fiber ['f] is running on logical CPU
['cpu0] which belongs to NUMA node ['node0]. The data of ['f] are allocated on
4 changes: 2 additions & 2 deletions doc/numa.qbk
@@ -110,7 +110,7 @@ In order to keep the memory access local as possible, the NUMA topology must be
node: 1 | cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 | distance: 21 10
done

- The example shows that the systems consits out of 2 NUMA-nodes, to each NUMA-node belong
+ The example shows that the system consists of 2 NUMA-nodes; to each NUMA-node belong
16 logical cpus. The distance measures the costs to access the memory of another NUMA-node.
A NUMA-node has always a distance `10` to itself (lowest possible value).[br]
The position in the array corresponds with the NUMA-node ID.
@@ -370,7 +370,7 @@ fiber scheduler).]]
[[Returns:] [the fiber at the head of the ready queue, or `nullptr` if the
queue is empty.]]
[[Throws:] [Nothing.]]
- [[Note:] [Placing ready fibers onto the tail of the sahred queue, and returning them
+ [[Note:] [Placing ready fibers onto the tail of the shared queue, and returning them
from the head of that queue, shares the thread between ready fibers in
round-robin fashion.]]
]
2 changes: 1 addition & 1 deletion doc/overview.qbk
@@ -10,7 +10,7 @@
__boost_fiber__ provides a framework for micro-/userland-threads (fibers)
scheduled cooperatively.
The API contains classes and functions to manage and synchronize fibers
- similiarly to __std_thread__.
+ similarly to __std_thread__.

Each fiber has its own stack.

6 changes: 3 additions & 3 deletions doc/rationale.qbk
@@ -22,12 +22,12 @@
[
[BOOST_FIBERS_SPIN_BACKOFF]
[limit determines when to used `std::this_thread::yield()` instead of
- mnemonic `pause/yield` during busy wait (apllies on to `XCHG`-spinlock)]
+ mnemonic `pause/yield` during busy wait (applies only to the `XCHG`-spinlock)]
]
[
[BOOST_FIBERS_SINGLE_CORE]
- [allways call `std::this_thread::yield()` without backoff during busy wait
- (apllies on to `XCHG`-spinlock)]
+ [always call `std::this_thread::yield()` without backoff during busy wait
+ (applies only to the `XCHG`-spinlock)]
]
]

8 changes: 4 additions & 4 deletions doc/speculative.qbk
@@ -6,21 +6,21 @@
]

[#speculation]
- [section:speculation Specualtive execution]
+ [section:speculation Speculative execution]

[heading Hardware transactional memory]

With help of hardware transactional memory multiple logical processors
execute a critical region speculatively, e.g. without explicit
synchronization.[br]
If the transactional execution completes successfully, then all memory
- operations performed within the transactional region are commited without any
+ operations performed within the transactional region are committed without any
inter-thread serialization.[br]
When the optimistic execution fails, the processor aborts the transaction and
discards all performed modifications.[br]
In non-transactional code a single lock serializes the access to a critical
region. With a transactional memory, multiple logical processor start a
- transaction and update the memory (the data) inside the ciritical region.
+ transaction and update the memory (the data) inside the critical region.
Unless some logical processors try to update the same data, the transactions
would always succeed.

@@ -30,7 +30,7 @@ would always succeed.
TSX is Intel's implementation of hardware transactional memory in modern Intel
processors[footnote intel.com: [@https://software.intel.com/en-us/node/695149
Intel Transactional Synchronization Extensions]].[br]
- In TSX the hardware keeps track of which cachelines have been read from and
+ In TSX the hardware keeps track of which cache-lines have been read from and
which have been written to in a transaction. The cache-line size (64-byte) and
the n-way set associative cache determine the maximum size of memory in a
transaction. For instance if a transaction modifies 9 cache-lines at a
2 changes: 1 addition & 1 deletion doc/stack.qbk
@@ -294,7 +294,7 @@ as stack space which suppresses the errors.

Sanitizers (GCC/Clang) are confused by the stack switches.
The library (and Boost.Context too) is required to be compiled with property (b2 command-line)
- `context-impl=ucontext` and compilers santizer options.
+ `context-impl=ucontext` and the compiler's sanitizer options.
Users must define `BOOST_USE_ASAN` before including any Boost.Context headers
when linking against Boost binaries.

12 changes: 6 additions & 6 deletions doc/tuning.qbk
@@ -21,7 +21,7 @@ and/or fibers are not synchronized between threads.
Memory allocation algorithm is significant for performance in a multithreaded
environment, especially for __boost_fiber__ where fiber stacks are allocated on
the heap. The default user-level memory allocator (UMA) of glibc is ptmalloc2
- but it can be replaced by another UMA that fit better for the concret work-load
+ but it can be replaced by another UMA that fits the concrete work-load better.
For instance Google[s]
[@http://goog-perftools.sourceforge.net/doc/tcmalloc.html TCmalloc] enables a
better performance at the ['skynet] microbenchmark than glibc[s] default memory
@@ -35,7 +35,7 @@ cooperatively, rather than preemptively.
Depending on the work-load several strategies of scheduling the fibers are
possible [footnote 1024cores.net:
[@http://www.1024cores.net/home/scalable-architecture/task-scheduling-strategies Task Scheduling Strategies]]
- that can be implmented on behalf of __algo__.
+ that can be implemented on behalf of __algo__.

* work-stealing: ready fibers are hold in a local queue, when the
fiber-scheduler's local queue runs out of ready fibers, it randomly
@@ -51,11 +51,11 @@ that can be implmented on behalf of __algo__.
concurrently push and pop ready fibers to/from the global queue
(implemented in __shared_work__)

- * work-distribution: fibers that became ready are proactivly distributed to
+ * work-distribution: fibers that became ready are proactively distributed to
idle fiber-schedulers or fiber-schedulers with low load

* work-balancing: a dedicated (helper) fiber-scheduler periodically collects
- informations about all fiber-scheduler running in other threads and
+ information about all fiber-schedulers running in other threads and
re-distributes ready fibers among them


@@ -121,12 +121,12 @@ are disabled.]

Modern multi-socket systems are usually designed as [link numa NUMA systems].
A suitable fiber scheduler like __numa_work_stealing__ reduces
- remote memory access (latence).
+ remote memory access (latency).


[heading Parameters]

- [table Parameters that migh be defiend at compiler's command line
+ [table Parameters that might be defined on the compiler's command line
[
[Parameter]
[Default value]
2 changes: 1 addition & 1 deletion doc/unbuffered_channel.qbk
@@ -7,7 +7,7 @@

[section:unbuffered_channel Unbuffered Channel]

- __boost_fiber__ provides template `unbuffered_channel` suitable to synchonize
+ __boost_fiber__ provides template `unbuffered_channel` suitable to synchronize
fibers (running on same or different threads) via synchronous message passing.
A fiber waiting to consume an value will block until the value is produced.
If a fiber attempts to send a value through an unbuffered channel and no fiber