Solving linker error LNK1112

Yesterday I brushed off some old code I had not touched in quite a while: a layer manager plugin for Autodesk 3ds Max that I had written years ago. I don’t think I had even opened the project files in the last 12 months. So I re-acquainted myself with the code, made some changes to the files, and went to recompile it. Now this project compiles to both 32-bit and 64-bit targets, so I randomly chose a 64-bit target and built the sucker.

Imagine my shock when I encountered a linker error stating:

module machine type ‘X86’ conflicts with target machine type ‘x64’

I knew enough about this in the past, and had never had a problem before. Like I said, I hadn’t touched the code in a long time. What’s more, my code was under source control!

I opened up the project settings and double- and triple-checked my linker Target Machine setting. Indeed, it was properly set to MachineX64. Perfect!

Still the problem persisted. Then I got worried that the library files I was linking were corrupted, and that somehow 32-bit libraries were intermixed with the 64-bit ones. I didn’t want to download the entire 3ds Max SDK again on the off chance this hypothesis was true. So I researched how to check what type of libraries I had!

So I used the dumpbin.exe utility, which is found here:

"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\dumpbin.exe"

And by the way, in Windows 7 I searched a few times for ‘dumpbin’, which failed every time. Turns out you have to search for dumpbin.exe. Go figure.

Anyway, I used this tool to dump out the functions in my lib file. Then I was able to inspect the functions and see what platform they were compiled for.

For instance, notice the ‘Machine’ entry in a function I dumped out using dumpbin.exe:

Version      : 0
Machine      : 8664 (x64)
TimeDateStamp: 49B974BD Thu Mar 12 14:46:53 2009
SizeOfData   : 0000002D
DLL name     : maxutil.dll
Symbol name  : ??0Path@Util@MaxSDK@@QEAA@PEBD@Z (public: __cdecl MaxSDK::Util::Path::Path(char const *))
Type         : code
Name type    : name
Hint         : 18
Name         : ??0Path@Util@MaxSDK@@QEAA@PEBD@Z

Notice the second line contains x64, a dead giveaway that this function was compiled for an x64 target platform. The problem was, my project linked in a lot of libraries from the 3ds Max SDK, which contains about 49 library files. How was I supposed to inspect every function from every library file?

Sounds like a job for a script:

@echo off
call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"

for %%f in (*.lib) do dumpbin.exe -headers %%f | findstr /c:"Machine      :" >> _lib-exports.txt

pause
@echo on

So I put the above script into a batch file and placed it in the directory containing all my library files. It iterates through all the library files, dumping out the header information. That output is piped to findstr, which searches for the machine line, and each match is appended to a log file I could inspect at my leisure. The log file contained 13642 lines looking like this:

Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)
Machine      : 8664 (x64)

So that was it. I opened the file in my favorite text editor, and a search for x86 turned up nothing. It was x64 all the way down.

So now what was I supposed to do? I was stuck. Let me review all I had done.

  1. My target machine linker setting was properly set.
  2. My build configuration was properly set to ‘x64’ and not Win32.
  3. All the libraries I was importing were actually 64 bit libraries.

All appeared lost and hopeless until I ran across a tidbit on an MSDN forum.

Here the gentleman answered:

If you have left the IDE at the default settings, then it will actually use the x86_amd64 compiler only. This is set under Tools -> Options -> Projects and Solutions -> VC++ Directories. Under amd64 executables the directory should be $(VCInstallDir)\bin\x86_amd64 followed by $(VCInstallDir)\bin.

This method works for all versions of Windows, so it’s a default setting. If you want to, though, you can change the x86_amd64 to just amd64.

So I looked at the executable settings for my x64 build configurations in VS 2008. It looked like this:

$(VCInstallDir)\bin
$(WindowsSdkDir)\bin

I changed it to this by adding the x86_amd64 directory:

$(VCInstallDir)\bin\x86_amd64
$(VCInstallDir)\bin
$(WindowsSdkDir)\bin

And everything works. My project now compiles and links just fine.

So I am left to surmise that some external add-on to Visual Studio hosed my executable tools settings for 64-bit configurations. Off and on over the last year I had been using Bullseye, which messes around with the settings in the project executable directories. So perhaps that is what clobbered that setting in Visual Studio. But now it’s fixed, and I am very happy.


_CrtCheckMemory() ignores small heap corruptions

The C run-time (CRT) libraries contain some useful debugging tools in crtdbg.h. In that header file is a useful function called _CrtCheckMemory(), which validates the debug heap. According to MSDN:

The _CrtCheckMemory function validates memory allocated by the debug heap manager by verifying the underlying base heap and inspecting every memory block. If an error or memory inconsistency is encountered in the underlying base heap, the debug header information, or the overwrite buffers, _CrtCheckMemory generates a debug report with information describing the error condition. When _DEBUG is not defined, calls to _CrtCheckMemory are removed during preprocessing.

So it seems like a good idea to put calls to it in your application to check for memory corruption.

But I found a problem, or at least a limitation: this function seems to ignore small heap corruptions.

For example, if I wanted to overrun an array by 5 elements, I would do the following:

void corruptMemory()
{
    const int ARRAY_SIZE = 500;
    int* iparray = new int[ARRAY_SIZE]; // a memory leak
    int offset = 5;
    int badIndex = ARRAY_SIZE + offset;
    iparray[badIndex] = 0xdeadbeef; // heap corruption
}

While experimenting with this, I found that _CrtCheckMemory() will not catch this overrun:

void bar()
{
    corruptMemory();
    _ASSERTE(_CrtCheckMemory()); // this should trigger the assert
}

While running the above code, I found that _CrtCheckMemory() remains blissfully ignorant of the memory corruption, returns TRUE to the assert macro, and nothing happens!

Only when I set the offset variable to 6 or higher does _CrtCheckMemory find and detect something. Presumably the write at offset 5 (20 bytes past the end of the block) lands in allocator padding beyond the debug heap’s small guard bands, while at offset 6 it finally reaches bookkeeping data the check actually inspects. Using the above code, the detection manifests itself (in a debug build) in two assert message boxes:

The first assert:

[image: CRT heap-corruption assert dialog]

And the assert arising from the failure condition returned from _CrtCheckMemory to _ASSERTE:

[image: _ASSERTE failure dialog]

Of course when I set the offset to a big enough value, say 89, I can corrupt something really important and screw up the CRT, and also crash the application on exit.

I have a project that demonstrates this. The project also demonstrates how to dump memory leaks to the output window at the end of program execution.

I’ll try to put the link here: