commit b916c5cd4d
Usually a single IO is confined to one group of devices (group_width), and at the boundary of a raid group it can spill into a second group. Current code would allocate a full device_table-size array at each io_state so it could comply with requests that span two groups. Needless to say, that is very wasteful, especially when the device_table count can get very large (hundreds, even thousands) while a group_width is usually 8 or 10.

* Change the ore API to trim IO that spans two raid groups. The user passes offset+length to ore_get_rw_state; the ore might trim that length if it spans a group boundary. The user must check ios->length or ios->nrpages to see how much IO will be performed. It is the responsibility of the user to re-issue the remainder of the IO.

* Modify exofs to copy spilled pages onto the next IO. This means one last kick is needed after all coalescing of pages is done.

Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
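To illustrate the new contract, here is a minimal sketch (not code from this patch) of how a caller might drive the trimmed API in a loop, re-issuing the remainder after each trimmed io_state. It assumes the ore_get_rw_state/ore_read/ore_put_io_state declarations of this period; the helper name read_in_groups and the header location are illustrative assumptions.

	#include <linux/types.h>
	#include <scsi/osd_ore.h>	/* assumed location of the ORE API declarations */

	/*
	 * Illustrative helper (not part of the patch): issue a read in a
	 * loop, letting the ORE trim each ore_io_state at a raid-group
	 * boundary and re-issuing the remainder, as the new API requires.
	 */
	static int read_in_groups(struct ore_layout *layout,
				  struct ore_components *comps,
				  u64 offset, u64 length)
	{
		while (length) {
			struct ore_io_state *ios;
			u64 done;
			int ret;

			ret = ore_get_rw_state(layout, comps, true /* reading */,
					       offset, length, &ios);
			if (ret)
				return ret;

			/* ... attach pages/bios covering ios->length bytes ... */

			ret = ore_read(ios);
			/* ios->length may be less than requested if the IO was
			 * trimmed at a group boundary; remember how much was done
			 * before releasing the io_state.
			 */
			done = ios->length;
			ore_put_io_state(ios);
			if (ret)
				return ret;

			offset += done;
			length -= done;
		}
		return 0;
	}

With this pattern each io_state only ever covers a single raid group, so it can be sized to group_width rather than to the whole device_table.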
BUGS
common.h
dir.c
exofs.h
file.c
inode.c
Kbuild
Kconfig
namei.c
ore.c
super.c
symlink.c