Accelerating sprites is a poor example, as ViewFinder demonstrates with the limited set of operations it can accelerate. Because the sprite colour depth and the screen colour depth do not need to match (and similarly for the palettes), it is very unlikely that any graphics card will provide quite what RISC OS would need.
As for optimising the horizontal line drawing code via the graphics card, this too is unlikely to yield any speed-up - in fact it would probably degrade performance, because of the overhead of calling out to the graphics card for every scanline of a shape. Rectangle fills and copies, however, are a totally different story.
Personally, I'd like to see the current RISC OS sprite format deprecated and a replacement implemented that removes a lot of the complications the current format allows; dropping palettes, aspect ratios and multiple colour depths would simplify things a lot. Let's face it: with a nice compressed format that was decompressed on loading, you'd only pay the memory cost once loaded; you could allow more than 12 characters in the name (which would suit applications that use long filenames); and you could design an arbitrary layout specifically so that graphics cards can accelerate it.
Unfortunately, dropping palettes and screen mode information would break a lot of code that outputs to sprites. Not to mention that the OS's ability to draw things to screen more quickly matters far less to me than OS development on threading, pre-emptive multitasking, or even just native alpha-blended sprites on something other than RISC OS Select, so that application writers can actually use them.
Whilst clever code is good, code that gets used is better.