Oracle 5.0 Reference Manual page 697
3. When the buffer gets full, run a qsort (quicksort) on it and store the result in a temporary file. Save a
pointer to the sorted block. (If all pairs fit into the sort buffer, no temporary file is created.)
4. Repeat the preceding steps until all rows have been read.
5. Do a multi-merge of up to MERGEBUFF (7) regions to one block in another temporary file. Repeat
until all blocks from the first file are in the second file.
6. Repeat the following until there are fewer than MERGEBUFF2 (15) blocks left.
7. On the last multi-merge, only the pointer to the row (the last part of the sort key) is written to a result
file.
8. Read the rows in sorted order by using the row pointers in the result file. To optimize this, we read
in a big block of row pointers, sort them, and use them to read the rows in sorted order into a row
buffer. The size of the buffer is the value of the read_rnd_buffer_size system variable.
The code for this step is in the sql/records.cc source file.
One problem with this approach is that it reads rows twice: once when evaluating the WHERE
clause, and again after sorting the pair values. And even if the rows were accessed successively the
first time (for example, if a table scan is done), the second time they are accessed randomly. (The sort
keys are ordered, but the row positions are not.)
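The two-pass behavior described in the steps above can be sketched at toy scale (Python, purely illustrative; the buffer size, the in-memory "temp files", and the row-fetch step are simplified stand-ins for the server's internals, and heapq merges all runs at once where the server merges at most MERGEBUFF per round):

```python
import heapq

def original_filesort(rows, key_func, sort_buffer_size=4):
    """Illustrative two-pass filesort: sort (key, row position) pairs
    in fixed-size chunks, merge the sorted runs, then re-read rows by
    position -- mirroring steps 3-8 above."""
    # Pass 1: fill the sort buffer with (sort key, row position) pairs,
    # quicksorting and spilling each full buffer as a sorted run.
    runs, buf = [], []
    for pos, row in enumerate(rows):
        buf.append((key_func(row), pos))
        if len(buf) == sort_buffer_size:
            runs.append(sorted(buf))   # qsort + spill to a "temp file"
            buf = []
    if buf:
        runs.append(sorted(buf))
    # Multi-merge of the sorted runs into one ordered stream.
    merged = heapq.merge(*runs)
    # Pass 2: the result holds only row positions, so every row is
    # fetched from the table a second time, in sort-key order
    # (i.e. random access against the table).
    return [rows[pos] for _, pos in merged]

table = [{"name": "cherry"}, {"name": "apple"}, {"name": "banana"}]
print(original_filesort(table, key_func=lambda r: r["name"]))
# [{'name': 'apple'}, {'name': 'banana'}, {'name': 'cherry'}]
```

The second list comprehension is the expensive part the text describes: the positions come out in key order, not storage order, so the re-reads are scattered.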
The modified filesort algorithm incorporates an optimization such that it records not only the sort
key value and row position, but also the columns required for the query. This avoids reading the rows
twice. The modified filesort algorithm works like this:
1. Read the rows that match the WHERE clause.
2. For each row, record a tuple of values consisting of the sort key value and row position, and also
the columns required for the query.
3. Sort the tuples by sort key value.
4. Retrieve the rows in sorted order, but read the required columns directly from the sorted tuples
rather than by accessing the table a second time.
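The difference from the original method can be sketched as follows (again a toy illustration, not server code; the column handling is simplified for the example):

```python
def modified_filesort(rows, key_func, needed_columns):
    """Illustrative modified filesort: each sort tuple carries the sort
    key plus the columns the query needs, so the table is read only
    once (steps 1-4 above)."""
    # Steps 1-2: one pass over the matching rows, recording tuples of
    # (sort key value, required column values).
    tuples = [(key_func(row), tuple(row[c] for c in needed_columns))
              for row in rows]
    # Step 3: sort the tuples by sort key value.
    tuples.sort(key=lambda t: t[0])
    # Step 4: results come straight from the sorted tuples; the table
    # is never accessed a second time.
    return [dict(zip(needed_columns, cols)) for _, cols in tuples]

table = [{"id": 3, "name": "cherry"}, {"id": 1, "name": "apple"}]
print(modified_filesort(table, key_func=lambda r: r["name"],
                        needed_columns=["id"]))
# [{'id': 1}, {'id': 3}]
```

Note how the sort tuples are wider than the (key, position) pairs of the original method: that extra width is exactly the trade-off the following paragraph discusses.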
Using the modified filesort algorithm, the tuples are longer than the pairs used in
the original method, and fewer of them fit in the sort buffer (the size of which is given by
sort_buffer_size). As a result, it is possible for the extra I/O to make the modified approach
slower, not faster. To avoid a slowdown, the optimization is used only if the total size of the extra
columns in the sort tuple does not exceed the value of the max_length_for_sort_data
system variable. (A symptom of setting the value of this variable too high is high
disk activity combined with low CPU activity.)
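The size check described above amounts to a simple guard, which can be pictured like this (a hypothetical helper: the function, its parameters, and the default threshold are made up for illustration; only the variable name comes from the text):

```python
def choose_filesort(extra_column_lens, max_length_for_sort_data=1024):
    """Toy version of the guard: use the modified (single-pass)
    algorithm only when the extra columns carried in each sort tuple
    stay within max_length_for_sort_data; otherwise fall back to the
    original two-pass pairs, which keep the sort buffer denser."""
    if sum(extra_column_lens) <= max_length_for_sort_data:
        return "modified"   # longer tuples, but no second table read
    return "original"       # short (key, position) pairs, rows read twice

print(choose_filesort([16, 32]))     # modified
print(choose_filesort([900, 400]))   # original
```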
For slow queries for which filesort is not used, you might try lowering
max_length_for_sort_data to a value that is appropriate to trigger a filesort.
If you want to increase ORDER BY speed, check whether you can get MySQL to use indexes rather
than an extra sorting phase. If this is not possible, you can try the following strategies:
• Increase the size of the sort_buffer_size variable.
• Increase the size of the read_rnd_buffer_size variable.
• Use less RAM per row by declaring columns only as large as they need to be to hold the values
stored in them. For example, CHAR(16) is better than CHAR(200) if values never exceed 16
characters.
• Change tmpdir to point to a dedicated file system with large amounts of free space. Also,
this option accepts several paths that are used in round-robin fashion, so you can use this feature
to spread the load across several directories.
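The round-robin use of several tmpdir paths can be sketched like this (illustrative only; the directory names are made up and the server's actual rotation is internal):

```python
import itertools

def make_tmpdir_picker(paths):
    """Cycle through the configured temporary-file directories so that
    successive spill files land on different file systems."""
    cycle = itertools.cycle(paths)
    return lambda: next(cycle)

pick = make_tmpdir_picker(["/disk1/tmp", "/disk2/tmp", "/disk3/tmp"])
print([pick() for _ in range(4)])
# ['/disk1/tmp', '/disk2/tmp', '/disk3/tmp', '/disk1/tmp']
```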