“Optimistic locking” is a well-established and widespread term though, not least because of its catchiness. The more accurate, but unwieldy term is “optimistic concurrency control”.
I agree that it is widespread, but whenever you see such an illogical term you have to wonder whether the authors who use it do not understand what they are really doing, or whether they understand but succumb to a widespread inappropriate usage in the hope of being better understood by naive readers.
Understanding the difference between pessimistic access (mutual exclusion implemented by locking) and optimistic access (concurrent accesses with retries when necessary) is absolutely critical for the correct and efficient implementation of algorithms that use shared data structures.
It is common to combine both methods in one algorithm, as here, but in English that is not "optimistic locking"; at most it is "locking and optimistic access" or "optimism and locking".
Pessimistic access means that you expect that another process will attempt to concurrently access the shared data, so you must use a lock to prevent this. Optimistic access means that you expect that no other process will attempt to concurrently access the shared data, so you may proceed to access it immediately, but then you must have some means of detecting that your assumption was wrong and another process has interfered, in which case the transaction must be retried.
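A minimal sketch of both approaches on a shared counter, in C++ (the function names pessimistic_add and optimistic_add are mine, purely for illustration):

    #include <atomic>
    #include <mutex>

    long shared_value = 0;
    std::mutex shared_value_lock;

    // Pessimistic: assume another process will interfere, so exclude
    // everyone else before touching the shared data.
    void pessimistic_add(long delta) {
        std::lock_guard<std::mutex> guard(shared_value_lock);
        shared_value += delta;
    }

    std::atomic<long> shared_atomic{0};

    // Optimistic: assume nobody interferes, access immediately, then
    // detect interference and retry the whole transaction if needed.
    void optimistic_add(long delta) {
        long seen = shared_atomic.load();
        // compare_exchange_weak fails (and refreshes 'seen') exactly when
        // another process changed the value between our load and our store,
        // i.e. when the optimistic assumption turned out to be wrong.
        while (!shared_atomic.compare_exchange_weak(seen, seen + delta)) {
            // retry with the refreshed value of 'seen'
        }
    }

Note that in the pessimistic version nothing ever has to be redone, while in the optimistic version the uncontended path is cheaper but every failure costs a retry.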
Depending on the application, either pessimistic access or optimistic access results in better performance; neither is always better than the other. Optimistic (lock-free) access improves the best case, but it makes the worst case much worse. Depending on the frequency distribution of those cases, optimistic access either increases or decreases performance.
Pessimistic access and optimistic access have the advantage of being applicable to any kind of shared data structure. However, dynamic data partitioning, where applicable, as for this shared B-tree example, which can be partitioned into sub-trees accessed concurrently, normally results in better performance than both, because it is deterministic and avoids both locking and retries. Dynamic data partitioning may require locking for a very short time in order to partition the shared resource before accessing the allocated part, though the mutual exclusion provided by atomic instructions may be sufficient for this purpose. Frequently a single atomic fetch-and-add instruction is enough to partition a shared data structure, such as a shared array or a shared message queue, among the concurrent processes attempting to access it.
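For example, a minimal sketch of fetch-and-add partitioning of a shared array among worker threads (the array contents and block size are hypothetical; the fetch_add is the whole point):

    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <vector>

    std::vector<int> items(10000, 1);       // shared array of work items
    std::atomic<std::size_t> next_index{0}; // first unclaimed index

    // Each worker claims a disjoint block with one atomic fetch-and-add:
    // no lock is held while working, no retry is ever needed, and no two
    // workers can ever receive the same slot.
    void worker(std::size_t block_size) {
        for (;;) {
            std::size_t begin = next_index.fetch_add(block_size);
            if (begin >= items.size())
                return;                      // the array is exhausted
            std::size_t end = std::min(begin + block_size, items.size());
            for (std::size_t i = begin; i < end; ++i)
                items[i] *= 2;               // this range is private to us
        }
    }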