The runtime cost of obfuscation
Morantex
Posts: 10
I've been running some performance measurements here, comparing obfuscated code against code that is not obfuscated.
Basically a test client/server is used to stress test a managed networking API - one that is wholly message based (as opposed to primitive sockets and their byte[] blocks).
This code does a lot - custom (very fast) serialization, intelligent buffering, message dismantling/reassembly and so on - and a lot of work and extensive profiling has gone into making it very fast.
Several test clients randomly exchange hundreds of thousands of messages with a test server and we look at total per-process CPU against number of messages.
Here is what I am seeing on tests that run for approx 1 minute elapsed time:
Debug build with obfuscation: 75 µs/msg.
Release build with obfuscation: 73 µs/msg.
Debug build, no obfuscation: 60 µs/msg.
Release build, no obfuscation: 50 µs/msg.
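For reference, the overhead implied by these figures can be computed directly. A minimal sketch (the per-message costs are taken from the table above; the dictionary layout is just for illustration):

```python
# Per-message CPU cost in microseconds, from the measurements above.
costs = {
    ("debug", True): 75,    # Debug build with obfuscation
    ("release", True): 73,  # Release build with obfuscation
    ("debug", False): 60,   # Debug build, no obfuscation
    ("release", False): 50, # Release build, no obfuscation
}

# Overhead of obfuscation relative to the unobfuscated build
# of the same configuration.
for build in ("debug", "release"):
    obfuscated = costs[(build, True)]
    plain = costs[(build, False)]
    overhead = 100.0 * (obfuscated - plain) / plain
    print(f"{build}: +{overhead:.0f}% CPU per message")
```

This works out to roughly +25% for the debug build and +46% for the release build, which is where the "about 50%" figure below comes from.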
Now since RedGate supply both an obfuscator and a profiler - I'd very much like someone to tell me what specific obfuscation choices cause the most runtime CPU penalties.
I could spend hours tweaking various settings, but there are numerous assemblies involved here, each with its own SA project file, and an SA project supports a host of differing choices and combinations of them - so frankly this is a non-starter (I may disable the string-oriented settings as a quick experiment, though).
What's clear here is that obfuscation of release build code can lead to code that runs slower than an ordinary debug build of that code.
In this case it's adding about 46% more CPU cost (73 µs vs 50 µs per message) over "pure" release build code - and for a high-performance API this is a big cost that undermines all of the original profiling effort.
So RedGate - how can you offer a product that helps us speed up code and at the same time offer us another product that slows it down again!
Thanks
Hugh
Comments
Hugh
That's the feature that we know causes systematic slow-down.
Empirical evidence suggests that level 2 (strictly valid) has the least effect.