The purpose of performance profiling (experimental analysis) is to measure a program's or algorithm's performance. Assume that you have performed an experimental analysis of an algorithm, and profiling returned the following times:

Instance size n:     2^10, 2^11, 2^12, 2^13, 2^14
Measured time (ms):  300,  510,  905,  1750, 3305
In these collected running times there is extra overhead from clocking program performance and from OS function calls, and this overhead has to be accounted for when analyzing the data. That is, each measured time consists of two components: the time for executing the algorithm's operations, plus the overhead from profiling.
The estimated overhead due to profiling is about 100 milliseconds, independent of the input size (that is, 100 is always an additive "constant" in the collected times).
Based on these experimental results, what is the running time, as a function of the problem size n, for executing this algorithm? (c is a constant.)
A. O(√n).
B. O(n^2).
C. O(n).
D. None of the listed options.
E. O(n log n).


1 Answer


Answer:

C. O(n).

Step-by-step explanation:

For context: the OS is the program that decides how many resources a process may use and when it gets access to the processor.

Each time a program is run, its data structures are loaded and unloaded.

Managing a running program therefore incurs extra overhead, and depending on the OS and the program, this overhead may or may not be fixed.

In our question, the overhead is fixed at 100 ms.

Subtracting this overhead gives the net times:

300-100, 510-100, 905-100, 1750-100, 3305-100

which is: 200, 410, 805, 1650, 3205

Each time the input size doubles (from 2^10 to 2^11 to 2^12, and so on), the net running time also roughly doubles: 410/200 ≈ 2.05, 805/410 ≈ 1.96, 1650/805 ≈ 2.05, 3205/1650 ≈ 1.94.

This means the running time grows linearly with the instance size n, i.e., T(n) ≈ c·n + 100, which is

O(n)
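
As a quick sanity check, here is a minimal Python sketch (not part of the original answer; the variable names are my own) that subtracts the assumed 100 ms overhead and prints the ratio between consecutive net times. Ratios near 2 confirm the linear growth:

    # Sketch: check that net running time roughly doubles when n doubles,
    # assuming a fixed 100 ms profiling overhead.
    measured_ms = [300, 510, 905, 1750, 3305]   # times for n = 2^10 .. 2^14
    OVERHEAD_MS = 100

    net_ms = [t - OVERHEAD_MS for t in measured_ms]   # [200, 410, 805, 1650, 3205]

    for i in range(1, len(net_ms)):
        ratio = net_ms[i] / net_ms[i - 1]
        print(f"n = 2^{10 + i}: net = {net_ms[i]} ms, ratio = {ratio:.2f}")
    # Each ratio is close to 2, so T(n) grows linearly => O(n).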
