C++ OOP Performance Tips – Part 1

We often encounter inheritance and composition implementations that are too flexible and too generic for the problem domain. They may perform computations that are rarely or never required. In practice, it is not surprising to discover performance overhead associated with inheritance and composition. This is a tradeoff between code reuse and performance: oftentimes, reusable code computes things you don't really need in a specific scenario, and any time you call functions that do more than you really need, you take a performance hit. This article highlights some of these scenarios.

This is not to say that inheritance is fundamentally a performance obstacle. We must distinguish between the overall computational cost, the required cost, and the computational penalty. The overall computational cost is the set of all instructions executed in a computation. The required cost is the subset of instructions whose results are actually necessary; this part of the computation is mandatory. The computational penalty is the difference: computational penalty = overall computational cost − required cost. This is the part of the computation that could have been eliminated by an alternative design or implementation (e.g., a different inheritance hierarchy or composition layout).

So we cannot make a blanket statement that complex inheritance designs are necessarily bad, nor do they always carry a performance penalty. All we can say is that overall cost grows with the size of the derivation tree. If all those computations are valuable then it is all required cost. In practice, inheritance hierarchies are not likely to be perfect. In that case, they are likely to impose a computational penalty.

Constructors and Destructors

The following is a snippet from the Visual Studio 2008 Disassembly window for a simple Foo() function that creates a Point object locally on the stack and destroys it on return.

class Point
{
public:
    Point() : X(0), Y(0) { cout << "Hello!\n"; }
    ~Point() { cout << "Bye bye\n"; }
    int X;
    int Y;
};

void Foo()
{
010820B0  push        ebp  
010820B1  mov         ebp,esp 
010820B3  sub         esp,8 
    Point obj;
010820B6  lea         ecx,[obj] // Pass obj address to the constructor 
010820B9  call        Point::Point (10813ACh) // Call Constructor 
}
010820BE  lea         ecx,[obj] // Pass obj address to the destructor
010820C1  call        Point::~Point (10813B1h) // Call Destructor
010820C6  xor         eax,eax 
010820C8  mov         esp,ebp 
010820CA  pop         ebp  
010820CB  ret             

The translated assembly instructions of each C++ statement appear directly below it. As you can see, constructors and destructors are normal functions like any other C++ function; the compiler knows exactly where and when to call them. A constructor or destructor call costs 2 assembly instructions: a lea to pass the object address in ecx, plus the call itself.

The following is the code generated for an empty Point constructor, a total of 8 assembly instructions.

class Point
{
public:
    Point() 
00E12960  push        ebp  
00E12961  mov         ebp,esp 
00E12963  push        ecx  
00E12964  mov         dword ptr [ebp-4],ecx 
    {
        
    }
00E12967  mov         eax,dword ptr [this] 
00E1296A  mov         esp,ebp 
00E1296C  pop         ebp  
00E1296D  ret   

The following is the code generated for an empty Point destructor, a total of 7 assembly instructions.

    ~Point()
    {
00E12580  push        ebp  
00E12581  mov         ebp,esp 
00E12583  push        ecx  
00E12584  mov         dword ptr [ebp-4],ecx   
    }
00E12587  mov         esp,ebp 
00E12589  pop         ebp  
00E1258A  ret        

Let’s calculate a base-line for the number of assembly instructions required to construct/destruct a Point object:

  1. 2 instructions to call the Point constructor Point::Point().
  2. 8 instructions to implement the empty Point constructor body.
  3. 2 instructions to call the Point destructor Point::~Point().
  4. 7 instructions to implement the empty Point destructor body.
  5. Total of 2 + 8 + 2 + 7 = 19 instructions to construct/destruct a Point object!

Enough theory; let's get our hands dirty, write some code, and figure out the effect of these extra 19 instructions. Consider the code snippet below (Version 0):
// Version 0
Point dummy;
for (int i = 0; i < 100000000; ++i)
{
    Point p;
    p.X = p.Y = i;
    dummy.X = p.X;
    dummy.Y = p.Y;
}

The above code doesn't do anything meaningful in a real-life program, but it will help illustrate the theory. We will focus only on the Point object construction/destruction. As computed above, each iteration pays 19 instructions for Point construction/destruction, so the overall construction/destruction cost of the for-loop is 19 multiplied by the number of iterations, 100 million, ≈ 1,900,000,000 instructions spent only on Point construction/destruction. Note that we didn't count the instructions required to initialize the members of p and dummy because they are irrelevant to our case study; we care only about Point construction/destruction.

A sensible optimization for the previous for-loop is to move the Point p; line outside the loop so that the Point object is constructed/destructed only once. In theory, the overall construction/destruction cost should drop from 1,900,000,000 instructions to just 19! We can call this overhead the "silent overhead".

The optimized for loop will look as follows (Version 1):

// Version 1
Point p;
Point dummy;
for (int i = 0; i < 100000000; ++i)
{
    p.X = p.Y = i;
    dummy.X = p.X;
    dummy.Y = p.Y;
}

Results

We chose a rather high iteration count, 100,000,000, to highlight the performance drop. Our experimental computer is equipped with a very fast Intel Core i7 processor with 8 logical processors, and 1 million iterations is too small a workload for it: the i7 executed 1 million iterations in about 2 milliseconds on average. That's why we chose a very high number of iterations, to make the difference visible.

[Figure: average running time of Version 0 vs. Version 1]

For Version 0 and Version 1, we ran the for-loop for 100 million iterations and took the average running time over 100 samples. The optimized Version 1 is approximately 3 times faster than Version 0.

From the previous experiment, it is obvious that object construction/destruction can lead to a big drop in performance when called unnecessarily, as in code snippet Version 0.

Key Points

  1. Watch out for the combinatorial explosion of created objects in complex hierarchies. The construction (destruction) of an object triggers recursive construction (destruction) of parent and member objects.
  2. Defer object construction (i.e local variable declaration, dynamic object allocation) to the scope in which it is manipulated. The object life cycle is not free of cost. At the very least, construction and destruction of an object may consume CPU cycles. Don’t create an object unless you are going to use it.
  3. Use the constructor initialization list to complete member object creation, because compilers must initialize contained member objects prior to entering the constructor body. This saves the overhead of calling the assignment operator later in the constructor body, and in some cases it also avoids the generation of temporary objects.
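The recursive construction mentioned in key point 1 can be observed directly. In this sketch (the class names Vehicle, Engine, and Car are our own, chosen for illustration), creating one Car object silently runs three constructors, and destroying it runs three destructors in reverse order:

```cpp
#include <iostream>

struct Engine
{
    Engine()  { std::cout << "Engine ctor\n"; }
    ~Engine() { std::cout << "Engine dtor\n"; }
};

struct Vehicle
{
    Vehicle()  { std::cout << "Vehicle ctor\n"; }
    ~Vehicle() { std::cout << "Vehicle dtor\n"; }
};

struct Car : Vehicle
{
    Car()  { std::cout << "Car ctor\n"; }
    ~Car() { std::cout << "Car dtor\n"; }
    Engine engine; // member object: constructed before Car's body runs
};

int main()
{
    Car c; // prints: Vehicle ctor, Engine ctor, Car ctor
}          // then:   Car dtor, Engine dtor, Vehicle dtor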

Do/ Don’t Do

Do this:

void Foo(int n)
{
    if (n == 0)
        return;
    else
    {
        HeavyClass heavyObj;
        heavyObj.DoWork();
    }
}

Don’t do this:

void Foo(int n)
{
    HeavyClass heavyObj;
    
    if (n == 0)
        return;
    else
    {
        heavyObj.DoWork();
    }
}

Do this:

class Person
{
public:
    Person(const char* name) : Name(name) {}
    string Name;
};

Don’t do this:

class Person
{
public:
    Person(const char* name) /* Name default constructor already called here */
    {
        Name = name;
    }
    string Name;
};

Reference

  • Efficient C++: Performance Programming Techniques by Dov Bulka and David Mayhew

Roots of Software Inefficiency

Being a geek ACMer or TopCoder beast (i.e., design and analysis of algorithms) is about writing an efficient algorithm in the scope of two or three functions doing a very specific and limited task. Writing software that runs on customers' computers is bigger than that. A typical mid-sized commercial Windows product consists of a set of executables, services, and DLLs interacting with each other to shape what is called software. The efficiency of algorithms and data structures is necessary but not sufficient: by itself, it does not guarantee good overall program efficiency. It is important to know the roots of software inefficiency if we care about writing fast software.

What are the factors that affect efficiency? The book Efficient C++: Performance Programming Techniques by Dov Bulka and David Mayhew gives a very good high-level categorization of these factors:

Design Efficiency This involves the program's high-level design. To fix performance problems at this level you must understand the program's big picture. We are talking here about software architecture, UML diagrams, pseudocode, algorithms, data structures, and anything else you can consider language-independent.

Code Efficiency Small- to medium-scale implementation issues fall into this category. Fixing performance in this category generally involves local modifications. For example, you do not need to look very far into a code fragment in order to lift a constant expression out of a loop and prevent redundant computations: the code fragment you need to understand is limited in scope to the loop body.

These two levels can be broken down further:

1. Design

1.1 Algorithms and Data Structures Technically speaking, every program is an algorithm in itself. Referring to "algorithms and data structures" actually refers to the well-known subset of algorithms for accessing, searching, sorting, compressing, and otherwise manipulating large collections of data. Oftentimes performance is automatically associated with the efficiency of the algorithms and data structures used in a program, as if nothing else matters, which is inaccurate.

1.2 Program Decomposition This involves decomposition of the overall task into communicating subtasks, object hierarchies, functions, data, and function flow. It is the program’s high-level design and includes component design as well as inter-component communication. Few programs consist of a single component. A typical Web application interacts (via API) with a Web server, TCP sockets, and a database, at the very least.

2. Coding

2.1 Language Constructs A programming language is a tool we use to tell computers how to do a specific task. Whether you use C++, C#, or Java, if you care about performance you need to understand the cost of your language's constructs, so as not to be shocked at run time when your program scales. C++ adds power and flexibility to its C ancestor (i.e., object-oriented capabilities). These added benefits do not come for free: some C++ language constructs may produce overhead in exchange.

2.2 System Architecture System designers invest considerable effort to present the programmer with an idealistic view of the system: infinite memory, dedicated CPU, parallel thread execution, and uniform-cost memory access. Of course, none of these is true—it just feels that way. Developing software free of system architecture considerations is also convenient. To achieve high performance, however, these architectural issues cannot be ignored since they can impact performance drastically. When it comes to performance we must bear in mind that:

  • Memory is not infinite. It is the virtual memory system that makes it appear that way.
  • The cost of memory access is non-uniform. There are orders of magnitude difference among cache, main memory, and disk access.
  • Our program does not have a dedicated CPU. We get a time slice only once in a while.
  • On a uniprocessor machine, parallel threads do not truly execute in parallel—they take turns.

If you write Windows software, you need to read up on the WinAPI and dig deep into the Windows programming world to understand how your host operating system, Windows in this case, will execute your program. The same applies if you write software for Linux, iOS, or any other operating system. A good understanding of the host operating system is a must.

2.3 Libraries The choice of libraries used by an implementation can also affect performance. For starters, some libraries may perform a task faster than others. Because you typically don't have access to the library's source code, it is hard to tell how library calls implement their services. For example, to convert an integer to a character string, you can choose between sprintf(string, "%d", i); and an integer-to-ASCII function call, itoa(i, string);. Which one is more efficient? Is the difference significant?

There is also the option of rolling your own version even if a particular service is already available in a library. Libraries are often designed with flexibility and reusability in mind. Often, flexibility and reusability trade off with performance. If, for some critical code fragment, you choose to put performance considerations above the other two, it might be reasonable to override a library service with your own home-grown implementation. Applications are so diverse in their specific needs, it is hard to design a library that will be the perfect solution for everybody, everywhere, all the time.

2.4 Compiler Optimizations Simply a more descriptive name than "miscellaneous," this category includes all those small coding tricks that don't fit in the other coding categories, such as loop unrolling, lifting constant expressions out of loops, and similar techniques for eliminating computational redundancies. Most compilers will perform many of those optimizations for you, but you cannot count on any specific compiler to perform a specific optimization. For ultimate control, you have to take coding matters into your own hands.

I remember how Visual Studio saved my team in an image processing performance competition at my faculty. In that competition your image processing package has to run many image processing algorithms, and your package's timing on each algorithm is used to rank it against the others. We optimized some of our algorithms manually but didn't have much time to optimize the rest. I got the idea of enabling Visual Studio's code optimization, and I was shocked by the results: the running time of many algorithms dropped dramatically, and I couldn't believe that optimization performed by the compiler could be that effective.

Conclusion

  1. Teach yourself how to design and analyze algorithms and practice well (e.g., problem solving on ACM online judges and TopCoder).
  2. Pick a programming language and master it (i.e., read about its internals and understand its scary dark side).
  3. Know the internals of a certain operating system on which you prefer to write your software (e.g., Windows, Linux, or Mac OS programming).
  4. Write big multi-file, multi-module projects with real requirements.