home:Ledest:erlang:24
File 0229-Fix-typos-in-erts-emulator-internal_doc.patch of Package erlang
From c2a320a122e3c73eb93a9ac05d2fce9d45702196 Mon Sep 17 00:00:00 2001
From: "Kian-Meng, Ang" <kianmeng@cpan.org>
Date: Wed, 24 Nov 2021 13:06:14 +0800
Subject: [PATCH] Fix typos in erts/emulator/internal_doc/

---
 erts/emulator/internal_doc/BeamAsm.md           |  2 +-
 erts/emulator/internal_doc/CodeLoading.md       |  2 +-
 erts/emulator/internal_doc/DelayedDealloc.md    |  4 ++--
 erts/emulator/internal_doc/GarbageCollection.md |  4 ++--
 erts/emulator/internal_doc/PTables.md           |  8 ++++----
 erts/emulator/internal_doc/PortSignals.md       |  4 ++--
 erts/emulator/internal_doc/SuperCarrier.md      | 14 +++++++-------
 erts/emulator/internal_doc/ThreadProgress.md    |  6 +++---
 erts/emulator/internal_doc/Tracing.md           |  4 ++--
 erts/emulator/internal_doc/beam_makeops.md      | 16 ++++++++--------
 erts/emulator/internal_doc/dec.erl              |  2 +-
 11 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/erts/emulator/internal_doc/BeamAsm.md b/erts/emulator/internal_doc/BeamAsm.md
index 8a74e22ee5..53b47f52a5 100644
--- a/erts/emulator/internal_doc/BeamAsm.md
+++ b/erts/emulator/internal_doc/BeamAsm.md
@@ -391,7 +391,7 @@ you what you want.
 
 ### Annotate perf functions
 
-If you want to be able to use the `perf annotate` functionality (and in extention
+If you want to be able to use the `perf annotate` functionality (and in extension
 the annotate functionality in the `perf report` gui) you need to use a
 monotonic clock when calling `perf record`, i.e. `perf record -k mono`. So
 for a dialyzer run you would do this:
diff --git a/erts/emulator/internal_doc/CodeLoading.md b/erts/emulator/internal_doc/CodeLoading.md
index cfdc1bf30a..fa5bba0643 100644
--- a/erts/emulator/internal_doc/CodeLoading.md
+++ b/erts/emulator/internal_doc/CodeLoading.md
@@ -56,7 +56,7 @@ different modules and returns a "magic binary" containing the internal
 state of each prepared module. Function `finish_loading` could take a
 list of such states and do the finishing of all of them in one go.
 
-Currenlty we use the legacy BIF `erlang:load_module` which is now
+Currently we use the legacy BIF `erlang:load_module` which is now
 implemented in Erlang by calling the above two functions in
 sequence. Function `finish_loading` is limited to only accepts a
 list with one module state as we do not yet use the multi module loading
diff --git a/erts/emulator/internal_doc/DelayedDealloc.md b/erts/emulator/internal_doc/DelayedDealloc.md
index 4b7c774141..8a86a70b10 100644
--- a/erts/emulator/internal_doc/DelayedDealloc.md
+++ b/erts/emulator/internal_doc/DelayedDealloc.md
@@ -89,7 +89,7 @@ same location in memory.
 
 The head contains pointers to beginning of the list (`head.first`),
 and to the first block which other threads may refer to
-(`head.unref_end`). Blocks between these pointers are only refered to
+(`head.unref_end`). Blocks between these pointers are only referred to
 by the head part of the data structure which is only used by the
 thread owning the allocator instance. When these two pointers are not
 equal the thread owning the allocator instance deallocate block after
@@ -137,7 +137,7 @@ If no new memory blocks are inserted into the list, it should
 eventually be emptied. All pointers to the list however expect to
 always point to something. This is solved by inserting an empty
 "marker" element, which only has to purpose of being there in the
-absense of other elements. That is when the list is empty it only
+absence of other elements. That is when the list is empty it only
 contains this "marker" element.
 
 ### Contention ###
 
diff --git a/erts/emulator/internal_doc/GarbageCollection.md b/erts/emulator/internal_doc/GarbageCollection.md
index a1627b3233..ae8602bd68 100644
--- a/erts/emulator/internal_doc/GarbageCollection.md
+++ b/erts/emulator/internal_doc/GarbageCollection.md
@@ -35,7 +35,7 @@ Compiling this code to beam assembly (`erlc -S`) shows exactly what is happening
 
 Looking at the assembler code we can see three things: The heap requirement in this function turns out to be only six words, as seen by the `{test_heap,6,1}` instruction. All the allocations are combined to a single instruction. The bulk of the data `{text, "hello world!"}` is a *literal*. Literals, sometimes referred to as constants, are not allocated in the function since they are a part of the module and allocated at load time.
 
-If there is not enough space available on the heap to satisfy the `test_heap` instructions request for memory, then a garbage collection is initiated. It may happen immediately in the `test_heap` instruction, or it can be delayed until a later time depending on what state the process is in. If the garbage collection is delayed, any memory needed will be allocated in heap fragments. Heap fragments are extra memory blocks that are a part of the young heap, but are not allocated in the contigious area where terms normally reside. See [The young heap](#the-young-heap) for more details.
+If there is not enough space available on the heap to satisfy the `test_heap` instructions request for memory, then a garbage collection is initiated. It may happen immediately in the `test_heap` instruction, or it can be delayed until a later time depending on what state the process is in. If the garbage collection is delayed, any memory needed will be allocated in heap fragments. Heap fragments are extra memory blocks that are a part of the young heap, but are not allocated in the contiguous area where terms normally reside. See [The young heap](#the-young-heap) for more details.
 
 ### The collector
 
@@ -171,7 +171,7 @@ There are a bunch of different tradeoffs that come into play when trying to figu
 
 Using `off_heap` may seem like a nice way to get a more scalable system as you get very little contention on the main locks, however, allocating a heap fragment is more expensive than allocating on the heap of the receiving process. So if it is very unlikely that contention will occur, it is more efficient to try to allocate the message directly on the receiving process' heap.
 
-Using `on_heap` will force all messages to be part of on the young heap which will increase the amount of data that the garbage collector has to move. So if a garbage collection is triggered while processing a large amount of messages, they will be copied to the young heap. This in turn will lead to that the messages will quickly be promoted to the old heap and thus increase its size. This may be good or bad depending on exactly what the process does. A large old heap means that the young heap will also be larger, which in turn means that less garbage collections will be triggered while processing the message queue. This will temporarly increase the throughput of the process at the cost of more memory usage. However, if after all the messages have been consumed the process enters a state where a lot less messages are being received. Then it may be a long time before the next fullsweep garbage collection happens and the messages that are on the old heap will be there until that happens. So while `on_heap` is potentially faster than the other modes, it uses more memory for a longer time. This mode is the legacy mode which is almost how the message queue was handled before Erlang/OTP 19.0.
+Using `on_heap` will force all messages to be part of on the young heap which will increase the amount of data that the garbage collector has to move. So if a garbage collection is triggered while processing a large amount of messages, they will be copied to the young heap. This in turn will lead to that the messages will quickly be promoted to the old heap and thus increase its size. This may be good or bad depending on exactly what the process does. A large old heap means that the young heap will also be larger, which in turn means that less garbage collections will be triggered while processing the message queue. This will temporarily increase the throughput of the process at the cost of more memory usage. However, if after all the messages have been consumed the process enters a state where a lot less messages are being received. Then it may be a long time before the next fullsweep garbage collection happens and the messages that are on the old heap will be there until that happens. So while `on_heap` is potentially faster than the other modes, it uses more memory for a longer time. This mode is the legacy mode which is almost how the message queue was handled before Erlang/OTP 19.0.
 
 Which one of these strategies is best depends a lot on what the process is doing and how it interacts with other processes. So, as always, profile the application and see how it behaves with the different options.
diff --git a/erts/emulator/internal_doc/PTables.md b/erts/emulator/internal_doc/PTables.md
index ef61963a40..6b316eaa7e 100644
--- a/erts/emulator/internal_doc/PTables.md
+++ b/erts/emulator/internal_doc/PTables.md
@@ -113,7 +113,7 @@ the "thread progress" functionality in order to determine when it is
 safe to deallocate the process structure. We'll get back to this when
 describing deletion in the table.
 
-Using this new lookup approach we wont modify any memory at all which
+Using this new lookup approach we won't modify any memory at all which
 is important. A lookup conceptually only read memory, now this is
 true in the implementation also which is important from a scalability
 perspective.
 The previous implementation modified the cache line
@@ -282,7 +282,7 @@ single cache line containing the state of the rwlock even in the case
 we are only read locking. Instead of using such an rwlock, we have
 our own implementation of reader optimized rwlocks which keeps track
 of reader threads in separate thread specific cache lines. This in order
-to avoid contention on a singe cache line. As long as we only do read
+to avoid contention on a single cache line. As long as we only do read
 lock operations, threads only need to read a global cache line and
 modify its own cache line, and by this minimize communication between
 involved processors. The iterating BIFs are normally very infrequently
@@ -299,7 +299,7 @@ threads modify the table at the same time as we are trying to find
 the slot. The easy fix is to abort the operation if an empty slot
 could not be found in a finite number operation, and then restart the
 operation under a write lock. This will be implemented in next
-release, but furter work should be made trying to find a better
+release, but further work should be made trying to find a better
 solution.
 
 This and also previous implementation do not work well when the table
@@ -320,7 +320,7 @@ not require exclusive access to the table while reading a sequence
 of slots. In principle this should be rather easy, the code can
 handle sequences of variable sizes, so shrinking the sequence size of
 slots to one would solv the problem. This will, however, need some tweeks
-and modifications of not trival code, but is something that should be
+and modifications of not trivial code, but is something that should be
 looked at in the future.
 
 By increasing the size of identifiers, at least on 64-bit machines
diff --git a/erts/emulator/internal_doc/PortSignals.md b/erts/emulator/internal_doc/PortSignals.md
index 8782ae4e17..f2490152ca 100644
--- a/erts/emulator/internal_doc/PortSignals.md
+++ b/erts/emulator/internal_doc/PortSignals.md
@@ -108,7 +108,7 @@ and a private, lock free, queue like, task data structure. This
 "semi locked" approach is similar to how the message boxes of
 processes are managed. The lock is port specific and only used for
 protection of port tasks, so the run queue lock is now needed in more or less the
-same way for ports as for processes. This ensures that we wont see an
+same way for ports as for processes. This ensures that we won't see an
 increased lock contention on run queue locks due to this rewrite of
 the port functionality.
 
@@ -211,7 +211,7 @@ consuming, and did not really depend on the port. That is we would
 like to do this without having the port lock locked.
 
 In order to improve this, state information was re-organized in the
-port structer, so that we can access it using atomic memory
+port structure, so that we can access it using atomic memory
 operations. This together with the new port table implementation,
 enabled us to lookup the port and inspect the state before acquiring
 the port lock, which in turn made it possible to perform preparations
diff --git a/erts/emulator/internal_doc/SuperCarrier.md b/erts/emulator/internal_doc/SuperCarrier.md
index f52c6613d5..55ac5a67af 100644
--- a/erts/emulator/internal_doc/SuperCarrier.md
+++ b/erts/emulator/internal_doc/SuperCarrier.md
@@ -12,7 +12,7 @@ Problem
 -------
 
 The initial motivation for this feature was customers asking for a way
-to pre-allocate physcial memory at VM start for it to use.
+to pre-allocate physical memory at VM start for it to use.
 
 Other problems were different experienced limitations of the OS
 implementation of mmap:
@@ -29,7 +29,7 @@ fragmentation increased.
 
 Solution
 --------
 
-Allocate one large continious area of address space at VM start and
+Allocate one large continuous area of address space at VM start and
 then use that area to satisfy our dynamic memory need during
 runtime. In other words: implement our own mmap.
@@ -70,7 +70,7 @@ name suggest that it can be viewed as our own mmap implementation.
 
 A super carrier needs to satisfy two slightly different kinds of
 allocation requests; multi block carriers (MBC) and single block
-carriers (SBC). They are both rather large blocks of continious
+carriers (SBC). They are both rather large blocks of continuous
 memory, but MBCs and SBCs have different demands on alignment and
 size.
@@ -79,13 +79,13 @@ alignment.
 
 MBCs are more restricted. They can only have a number of fixed
 sizes that are powers of 2. The start address need to have a very
-large aligment (currently 256 kb, called "super alignment"). This is a
+large alignment (currently 256 kb, called "super alignment"). This is a
 design choice that allows very low overhead per allocated block in the
 MBC.
 
 To reduce fragmentation within the super carrier, it is good to keep
 SBCs and MBCs apart. MBCs with their uniform alignment and sizes can be
-packed very efficiently together. SBCs without demand for aligment can
+packed very efficiently together. SBCs without demand for alignment can
 also be allocated quite efficiently together. But mixing them can lead
 to a lot of memory wasted when we need to create large holes of
 padding to the next alignment limit.
@@ -102,9 +102,9 @@ The MBC area is called *sa* as in super aligned and the SBC area is
 called *sua* as in super un-aligned.
 
 Note that the "super" in super alignment and the "super" in super
-carrier has nothing to do with each other. We could have choosen
+carrier has nothing to do with each other. We could have chosen
 another naming to avoid confusion, such as "meta" carrier or "giant"
-aligment.
+alignment.
 
 	+-------+ <---- sua.top
 	|  sua  |
diff --git a/erts/emulator/internal_doc/ThreadProgress.md b/erts/emulator/internal_doc/ThreadProgress.md
index 03a802f904..a48b250104 100644
--- a/erts/emulator/internal_doc/ThreadProgress.md
+++ b/erts/emulator/internal_doc/ThreadProgress.md
@@ -78,7 +78,7 @@ thread progress operation has been initiated, and at least once
 ordered using communication via memory which makes it possible to
 draw conclusion about the memory state after the thread progress
 operation has completed. Lets call the progress made from initiation to
-comletion for "thread progress".
+completion for "thread progress".
 
 Assuming that the thread progress functionality is efficient, a lot of
 algorithms can both be simplified and made more efficient than using
@@ -120,7 +120,7 @@ communication.
 
 We also want threads to be able to determine when thread progress
 has been made relatively fast. That is we need to have some balance
-between comunication overhead and time to complete the operation.
+between communication overhead and time to complete the operation.
 
 ### API ###
@@ -222,7 +222,7 @@ current global value plus one at the time when we call
 confirmed global value plus two at this time.
 
 The above described implementation more or less minimizes the
-comunication needed before we can increment the global counter. The
+communication needed before we can increment the global counter. The
 amount of communication in the system due to the thread progress
 functionality however also depend on the frequency with which managed
 threads call `erts_thr_progress_update()`. Today each scheduler thread
diff --git a/erts/emulator/internal_doc/Tracing.md b/erts/emulator/internal_doc/Tracing.md
index d81739c7cb..f0182daad8 100644
--- a/erts/emulator/internal_doc/Tracing.md
+++ b/erts/emulator/internal_doc/Tracing.md
@@ -106,7 +106,7 @@ instantaneously without the need of external function calls.
 
 The chosen solution is instead for tracing to use the technique of
 replication applied on the data structures for breakpoints. Two
-generations of breakpoints are kept and indentified by index of 0 and
+generations of breakpoints are kept and identified by index of 0 and
 1\. The global atomic variables `erts_active_bp_index` will determine
 which generation of breakpoints running code will use.
@@ -236,7 +236,7 @@ value of `erts_active_bp_index` at different times as it is read
 without any memory barrier. But this is the best we can do without
 more expensive thread synchronization.
 
-The waiting in step 8 is to make sure we dont't restore the original
+The waiting in step 8 is to make sure we don't restore the original
 bream instructions for disabled breakpoints until we know that no
 thread is still accessing the old enabled part of a disabled
 breakpoint.
diff --git a/erts/emulator/internal_doc/beam_makeops.md b/erts/emulator/internal_doc/beam_makeops.md
index 578858d686..b2ef0e8299 100644
--- a/erts/emulator/internal_doc/beam_makeops.md
+++ b/erts/emulator/internal_doc/beam_makeops.md
@@ -77,7 +77,7 @@ following line:
     64: move/2
 
 This is a definition of an external generic BEAM instruction. Most
-importantly it specifices that the opcode is 64. It also defines that
+importantly it specifies that the opcode is 64. It also defines that
 it has two operands. The BEAM assembler will use the opcode when
 creating `.beam` files. The compiler does not really need the arity,
 but it will use it as an internal sanity check when assembling the
@@ -122,7 +122,7 @@ layout for the instruction `{move,{atom,id},{x,5}}`:
     +--------------------+--------------------+
 
 This example and all other examples in the document assumes a 64-bit
-archictecture, and furthermore that pointers to C code fit in 32 bits.
+architecture, and furthermore that pointers to C code fit in 32 bits.
 
 `I` in the BEAM virtual machine is the instruction pointer.
 When BEAM executes an instruction, `I` points to the first word of the
@@ -322,7 +322,7 @@ in the code area for the module being loaded.
 * The loader translates each operand to a machine word and stores it
 in the code area. The operand type for the selected specific
 instruction guides the translation. For example, if the type is `e`,
-the value of the operand is an index into an arry of external
+the value of the operand is an index into an array of external
 functions and will be translated to a pointer to the export entry for
 the function to call. If the type is `x`, the number of the X
 register will be multiplied by the word size to produce a byte offset.
@@ -382,7 +382,7 @@ The following output files will be generated in the output directory.
 instructions (including how to pack their operands), and
 transformation rules are all part of this file.
 
-* `beam_opcodes.h` - Miscellanous preprocessor definitions, mainly
+* `beam_opcodes.h` - Miscellaneous preprocessor definitions, mainly
 used by `beam_load.c` but also by `beam_{hot,warm,cold}.h`.
 
 * `beam_transform.c` - Implementation of guard constraints and generators
@@ -726,18 +726,18 @@ register as a port. Therefore the literal term must not contain a port
 or pid.)
 
 * `S` - Tagged source register (X or Y). The tag will be tested at
-runtime to retrieve the value from an X register or a Y register. Slighly
+runtime to retrieve the value from an X register or a Y register. Slightly
 cheaper than `s`.
 
 * `d` - Tagged destination register (X or Y). The tag will be tested
 at runtime to set up a pointer to the destination register. If the
-instruction performs a garbarge collection, it must use the
+instruction performs a garbage collection, it must use the
 `$REFRESH_GEN_DEST()` macro to refresh the pointer before storing to
 it (there are more details about that in a later section).
 
 * `j` - A failure label (combination of `f` and `p`).
 If the branch
 target 0, an exception will be raised if instruction fails,
 otherwise control will be
-transfered to the target address.
+transferred to the target address.
 
 The types that follows are all applied to an operand that has the `u`
 type.
@@ -1616,7 +1616,7 @@ of the instruction, a pointer will be initialized to point to the X or
 Y register in question.
 
 If there is a garbage collection before the result is stored,
-the stack will move and if the `d` operand refered to a Y
+the stack will move and if the `d` operand referred to a Y
 register, the pointer will no longer be valid. (Y registers are
 stored on the stack.)
diff --git a/erts/emulator/internal_doc/dec.erl b/erts/emulator/internal_doc/dec.erl
index 8ce83fa402..52ab42ebc0 100644
--- a/erts/emulator/internal_doc/dec.erl
+++ b/erts/emulator/internal_doc/dec.erl
@@ -24,7 +24,7 @@
 %% The C header is generated from a text file containing tuples in the
 %% following format:
 %% {RevList,Translation}
-%% Where 'RevList' is a reversed list of the denormalized repressentation of
+%% Where 'RevList' is a reversed list of the denormalized representation of
 %% the character 'Translation'. An example would be the swedish character
 %% 'ö', which would be represented in the file as:
 %% {[776,111],246}, as the denormalized representation of codepoint 246
-- 
2.31.1