12-07-2011, 01:33 AM

Looks like support for these is rather minimal.

If at some point in the future that changes, I can offer some implementation suggestions.

1) Selecting a GPU load percentage seems fairly trivial.

At startup, after retrieving the GPU capabilities (# of shaders), simply scale that number by the selected load percentage and use the result.

All this assumes is that the # of shaders is an input to some function in the application.
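A minimal sketch of suggestion 1, assuming the application exposes the total shader count and takes a worker count as input (the class and method names here are hypothetical, not from any real miner):

```java
// Hypothetical sketch: scale the reported shader count by a user-chosen
// load percentage, done once at startup.
public class GpuLoad {
    /** Returns how many shaders to use for a load percentage in 1..100. */
    public static int shadersForLoad(int totalShaders, int loadPercent) {
        int scaled = (int) Math.round(totalShaders * (loadPercent / 100.0));
        return Math.max(1, scaled); // never drop to zero workers
    }

    public static void main(String[] args) {
        // e.g. a card reporting 1600 shaders at a 75% load setting
        System.out.println(shadersForLoad(1600, 75)); // 1200
    }
}
```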

2) Optimizing "workload parameters" is a worthy goal only if they have a significant effect on efficiency.

For this to work, it would be best if they could be formulated as a function of independent parameters with a convex response (no local maxima), with Hashes/s as the output.

After reaching something of a steady state, "tweak" each parameter (up and down) to maximize Hashes/s.

A separate thread can handle this, but it assumes that the parameters can be adjusted after startup.
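The "tweak each parameter up and down" idea above amounts to coordinate-wise hill climbing. A rough sketch, assuming the convex response described and a callback that reports the measured Hashes/s for a given parameter vector (all names here are illustrative):

```java
import java.util.function.Function;

// Hypothetical sketch: coordinate-wise hill climbing on workload parameters,
// using measured Hashes/s as the objective. Assumes a convex response
// (no local maxima) and that parameters can be changed at run time.
public class ParamTuner {
    public static double[] tune(double[] params, double[] step,
                                Function<double[], Double> hashRate, int rounds) {
        double best = hashRate.apply(params);
        for (int r = 0; r < rounds; r++) {
            for (int i = 0; i < params.length; i++) {
                for (double dir : new double[]{+1.0, -1.0}) { // try up, then down
                    params[i] += dir * step[i];
                    double rate = hashRate.apply(params);
                    if (rate > best) {
                        best = rate;                 // keep the improvement
                    } else {
                        params[i] -= dir * step[i];  // revert the tweak
                    }
                }
            }
        }
        return params;
    }
}
```

In practice the hashRate callback would average over a measurement window (several seconds) so noise doesn't cause spurious reverts.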

3) Scaling back a GPU so it doesn't exceed a maximum temperature is a simple feedback and control system, similar to the heater thermostat on your wall.

This would require a monitoring thread (in Java I'd implement it as a repeating timer that fires an event every 0.5s or so).

If the maximum temperature is approached, scale back the number of available shaders.

As before, it assumes that the application architecture uses something like a job queue, where the number of workers (i.e. shaders) can be changed at run time.

And it assumes that the temperature reading from the GPU is accurate.
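The thermostat loop from suggestion 3 might look something like the following, using the repeating 0.5s timer mentioned above. The temperature reader and pool-resize hooks are stand-ins for whatever the application actually provides:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.function.DoubleSupplier;
import java.util.function.IntConsumer;

// Hypothetical sketch: a repeating 0.5s timer that throttles the shader
// (worker) count as the GPU approaches a maximum temperature.
public class ThermalGovernor {
    static final double MAX_TEMP_C = 90.0; // example limit, card-dependent

    /** Pure decision step: next shader count given current temp and count. */
    static int throttle(double tempC, int current, int maxShaders) {
        if (tempC >= MAX_TEMP_C) {
            return Math.max(1, current / 2);       // at the limit: back off hard
        } else if (tempC < MAX_TEMP_C - 10.0) {
            return Math.min(maxShaders, current + 1); // cool again: ramp back up
        }
        return current;                            // dead band: hold steady
    }

    /** Polls the temperature every 500 ms and resizes the worker pool. */
    static Timer start(DoubleSupplier readTempC, IntConsumer resizePool,
                       int initialShaders, int maxShaders) {
        Timer timer = new Timer("thermal-governor", true);
        timer.scheduleAtFixedRate(new TimerTask() {
            int shaders = initialShaders;
            @Override public void run() {
                shaders = throttle(readTempC.getAsDouble(), shaders, maxShaders);
                resizePool.accept(shaders);
            }
        }, 0, 500); // fire every 0.5s, as suggested above
        return timer;
    }
}
```

The dead band between "back off" and "ramp up" is the same trick a wall thermostat uses to avoid oscillating around the setpoint.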

My $0.02

Thanks for your consideration.
