Project Soon

Experiment

An experiment in Computer Science is usually a small, trivial program that runs a larger, complex program.

There are tons of ways to run an experiment. Most come down to measuring how long a specific task takes, comparing different algorithms or varying the size of the data. Another common one is counting the number of occurrences of a specific event. Most of the time it is hard to know up front which metrics to measure, so you end up collecting several of them in different places and later creating graphs, from which you pick the ones that turn out to be useful for your study.
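As a rough illustration of the first kind of measurement, a timing run can be as small as the sketch below. The workload (`task`, here just a sum of squares) and the input sizes are placeholders I made up, not the actual program under study; only the structure matters: time the task over growing data sizes and record the results.

```rust
use std::time::Instant;

// Hypothetical stand-in for "a specific task": sum of squares over 0..n.
// Wrapping arithmetic keeps the largest size from overflowing.
fn task(n: u64) -> u64 {
    (0..n).fold(0u64, |acc, i| acc.wrapping_add(i.wrapping_mul(i)))
}

fn main() {
    for &size in &[1_000_000u64, 10_000_000, 100_000_000] {
        let start = Instant::now();
        let result = task(size);
        let elapsed = start.elapsed();
        // Print the data size, elapsed time, and result so the work is not optimized away.
        println!("size = {:>11}, time = {:?}, result = {}", size, elapsed, result);
    }
}
```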

Because of this, I came up with several metrics that I can later use to weigh against my research question.

The first two are useful for checking that batching tasks actually improves overall performance by reducing overhead. Lock miss is a general value for determining whether high core usage leads to the locking issues we are trying to circumvent. Empty miss is more of a check that batching may increase misses and distribute tasks unevenly over several queues.
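To make the last two concrete, here is a minimal sketch of how such counters could be collected: workers pull from a shared queue, a failed `try_lock` is counted as a lock miss, and successfully taking the lock only to find the queue empty is counted as an empty miss. The single queue, the worker count, and the dummy work are my assumptions for illustration, not the actual design.

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Hypothetical shared work queue filled with dummy task ids.
    let queue: Arc<Mutex<VecDeque<u64>>> = Arc::new(Mutex::new((0..10_000).collect()));
    let lock_misses = Arc::new(AtomicU64::new(0));
    let empty_misses = Arc::new(AtomicU64::new(0));

    let workers: Vec<_> = (0..4)
        .map(|_| {
            let queue = Arc::clone(&queue);
            let lock_misses = Arc::clone(&lock_misses);
            let empty_misses = Arc::clone(&empty_misses);
            thread::spawn(move || loop {
                match queue.try_lock() {
                    // Another worker holds the lock: count a lock miss and retry.
                    Err(_) => {
                        lock_misses.fetch_add(1, Ordering::Relaxed);
                    }
                    Ok(mut q) => match q.pop_front() {
                        Some(task) => {
                            drop(q); // release the lock before doing the work
                            std::hint::black_box(task * task);
                        }
                        // Got the lock but the queue was empty: count an empty miss and stop.
                        None => {
                            empty_misses.fetch_add(1, Ordering::Relaxed);
                            break;
                        }
                    },
                }
            })
        })
        .collect();

    for w in workers {
        w.join().unwrap();
    }
    println!("lock misses:  {}", lock_misses.load(Ordering::Relaxed));
    println!("empty misses: {}", empty_misses.load(Ordering::Relaxed));
}
```

With batching, a worker would pop several tasks per lock acquisition instead of one, which should lower the lock-miss count but can raise the empty-miss count when the batches drain the queue unevenly.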

There could be more metrics to find, but these are the ones I found interesting. If the first two show barely noticeable differences, the last two could give better proof of a significant difference.