This video showcases some of the things I can do with the new rendering engine and tool transaction system in paint.net 4.0. Even the Paint Bucket tool can get awesome “WYSIWYGEWYEI” (What You See Is What You Get … Especially While You’re Editing It, which needs a better name) and Fine-Grained History (you can undo/redo every change, not just those that commit pixels to the layer).

One big annoyance of the Paint Bucket tool in every imaging app out there is that it doesn’t do a good job of letting you explore and be creative. There are two primary “inputs” for it: the origin point (where you clicked), and the tolerance setting. Where you click determines the point from which the flood fill algorithm executes, and which color is used as the basis for comparing against other colors to see if they are at a “distance” (Euclidean) that is less than the tolerance value. Colors that are within that distance are filled in with whatever color or pattern you specify. Black and white are as far apart as possible and require a high tolerance value to “notice” each other, while shades of the same color are computed as relatively close to each other and will be included at lower tolerance values.
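To make that concrete, here’s a minimal sketch of the kind of tolerance test described above. This is an illustration only, not Paint.NET’s actual implementation; the real code may scale the tolerance slider and weight the channels differently.

    using System;
    using System.Drawing;

    static class FloodFillMath
    {
        // Hypothetical helper: is 'candidate' close enough to 'origin' to be flood filled?
        public static bool IsWithinTolerance(Color origin, Color candidate, double tolerance)
        {
            double dr = origin.R - candidate.R;
            double dg = origin.G - candidate.G;
            double db = origin.B - candidate.B;
            double da = origin.A - candidate.A;

            // Euclidean distance in RGBA space: black and white are far apart,
            // while two shades of the same color are close together.
            double distance = Math.Sqrt(dr * dr + dg * dg + db * db + da * da);
            return distance <= tolerance;
        }
    }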

What happens in most imaging apps* is that you click somewhere with the Paint Bucket tool, look at the result, and decide that you wish you’d either clicked somewhere else or used a different tolerance value. On a rare occasion, it looks perfect and you’re done.

Then you click undo.

Next, you click somewhere else, possibly after editing the tolerance in the toolbar. Then you realize it’s not exactly what you want, so …

Then you click undo. And repeat. And repeat, and repeat, and repeat.

In paint.net 4.0 I’m working to finally get rid of that repetition, which is work I started with the new Gradient tool I added back in 3.0 (2006!). Once you click somewhere with the Paint Bucket tool, you can go edit the tolerance in the toolbar which essentially causes your click action to be re-executed. You can also move your mouse down into the canvas where you clicked, and drag around a handle which will move the origin point. You can change the color, or anything else that affects how the fill is rendered. You can use the undo and redo commands to walk through every adjustment that you’re trying out.

This is a very powerful addition to the tools in paint.net which really enables you to quickly explore the creative landscape in a way that no other image editing software can. It also lets you gain an intuitive understanding of settings that do not necessarily lend themselves to short, intuitive descriptions (like tolerance!), but which are easily learned through interactive exploration. This video was recorded a few weeks ago. Since then I’ve added antialiasing as well as the ability to choose between sampling the current layer or the whole image, and have also made other performance improvements. (I’ve also removed the “old” Paint Bucket tool, which is why you see the “new” version of it sitting at the bottom of the Tools window in the video.)

This is my first video posting, so we’ll see how it goes! I didn’t think I could properly discuss this feature with just words and pictures.

* every one that I know of, but I used the word “most” just in case I’m wrong ;)

.NET Framework 4.5 contains a very cool new feature called Multi-Core JIT. You can think of it as a profile-guided JIT prefetcher for application startup, and you can read about it in a few places …

I’ve been using .NET 4.0 to develop Paint.NET 4.0 for the past few years. Now that .NET 4.5 is out, I’ve been upgrading Paint.NET to require it. However, due to a circumstance beyond my control at this moment, I can’t actually use anything in .NET 4.5 (see below for why). So Paint.NET is compiled for .NET 4.0 and can’t use .NET 4.5’s features at compile time, but as it turns out they are still there at runtime.

I decided to see if it was possible to use the ProfileOptimization class via reflection even if I compiled for .NET 4.0. The answer: yes! You may ask why you’d want to do this at all instead of biting the bullet and requiring .NET 4.5. Well, you may need to keep your project on .NET 4.0 in order to maintain maximum compatibility with your customers who aren’t yet ready (or willing :)) to install .NET 4.5. Maybe you’d like to use the ProfileOptimization class in your next “dot release” (e.g. v1.0.1) as a free performance boost for those who’ve upgraded to .NET 4.5, but without dropping support for those who haven’t.

So, here’s the code, which I’ve verified as working just fine if you compile for .NET 4.0 but run with .NET 4.5 installed:

using System;
using System.IO;
using System.Reflection;

Type systemRuntimeProfileOptimizationType = Type.GetType("System.Runtime.ProfileOptimization", false);
if (systemRuntimeProfileOptimizationType != null)
{
    MethodInfo setProfileRootMethod = systemRuntimeProfileOptimizationType.GetMethod("SetProfileRoot", BindingFlags.Static | BindingFlags.Public, null, new Type[] { typeof(string) }, null);
    MethodInfo startProfileMethod = systemRuntimeProfileOptimizationType.GetMethod("StartProfile", BindingFlags.Static | BindingFlags.Public, null, new Type[] { typeof(string) }, null);

    if (setProfileRootMethod != null && startProfileMethod != null)
    {
        try
        {
            // Figure out where to put the profile (go ahead and customize this for your application)
            // This code will end up using something like C:\Users\UserName\AppData\Local\YourAppName\ProfileOptimization\
            string localSettingsDir = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
            string localAppSettingsDir = Path.Combine(localSettingsDir, "YourAppName");
            string profileDir = Path.Combine(localAppSettingsDir, "ProfileOptimization");
            Directory.CreateDirectory(profileDir);

            setProfileRootMethod.Invoke(null, new object[] { profileDir });
            startProfileMethod.Invoke(null, new object[] { "Startup.profile" }); // don’t need to be too clever here
        }
        catch (Exception)
        {
            // discard errors. good faith effort only.
        }
    }
}

I’m not sure I’ll be using this in Paint.NET 4.0 since it uses NGEN already, but it’s nice to have this code snippet around.

So, why can’t I use .NET 4.5? Well, they removed support for Setup projects (*.vdproj) in Visual Studio 2012, and I don’t yet have the time or energy to convert Paint.NET’s MSI to be built using WiX. I’m not willing to push back Paint.NET 4.0 any further because of this. Instead, I will continue using Visual Studio 2010 and compiling for .NET 4.0 (or maybe I’ll find a better approach). However, at install time and application startup, it will check for and require .NET 4.5. The installer will get it installed if necessary. Also, there’s a serialization bug in .NET 4.0 which has dire consequences for images saved in the native .PDN file format, but it’s fixed in .NET 4.5 (and for .NET 4.0 apps if 4.5 just happens to be what’s installed).

I finally succumbed and bought a copy of Diablo 3 today, only to find out that it just doesn’t work:

Argh! No matter what I did, it would always crash. Every single time, over and over and over and over again.

In a last act of desperation before borrowing the DVD from a friend to try and load it that way, I had some Raymond Chen style psychic insight and thought it might be a multithreading bug. You see, I just put together a brand new Dual Xeon E5-2687W system. It is a beast: dual processor, 8 cores each, with HyperThreading. That means Task Manager shows 32 tiny little performance graphs. It makes compiling Paint.NET really fast (lots of C++/CLI these days), and is killer for working on all that multithreaded rendering code.

Anyway, the fix is a bit clumsy but it seems to work (so far! we’ll see if it still works after all the downloading is done):

  1. Download the “Diablo-III-Setup-enUS.exe” as usual, from Blizzard’s website.
  2. Run it, as usual (double click on it).
  3. When you get the UAC prompt, do NOT click Yes (yet).
  4. Instead, open up Task Manager and find the program in the “Processes” tab (Diablo-whatever.exe).
  5. Right click on it and then click on the “Set Affinity…” command.
  6. Make sure only one CPU checkbox is enabled. If you’re on Windows 7, just click the “<All Processors>” node to uncheck everything, and then click on “CPU 0” to enable it. This will lock the program to just 1 CPU core/thread, minimizing the risk of the hypothesized multithreading bug.
  7. Now you can click on Yes in the UAC prompt… and tada, it should work.
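If you’d rather not click through Task Manager every time, the same trick can be scripted. Here’s a small, hedged C# sketch, assuming the setup process is named “Diablo-III-Setup-enUS”; it pins the process to CPU 0 via the ProcessorAffinity property (you may need to run it elevated):

    using System;
    using System.Diagnostics;

    static class AffinityPinner
    {
        static void Main()
        {
            // ProcessorAffinity is a bitmask: bit 0 = CPU 0, bit 1 = CPU 1, and so on.
            // Setting it to 0x1 locks the process to a single core, just like step 6 above.
            foreach (Process p in Process.GetProcessesByName("Diablo-III-Setup-enUS"))
            {
                p.ProcessorAffinity = (IntPtr)0x1;
            }
        }
    }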

I found some battle.net forum threads where tons of people are having this issue, and it goes on and on for pages and pages without any fix for the poor souls (so to speak).

Once it starts downloading you’ll probably want to do the same thing for “Blizzard Launcher.exe” except that this time you’ll 1) have to click the “Show processes from all users” button (bottom of Task Manager in the Processes tab), and then 2) enable all CPUs instead of having any of them disabled.

Hope this helps anyone else who’s having this frustrating problem.

Update: Once Diablo 3 finished downloading, it still would not start after clicking the Play button. “Diablo III.exe” would pop up in Task Manager, and then silently disappear a few seconds later. According to the Windows Event Viewer, it was crashing. However, I did get it to work, and the trick is to “Set Affinity” on explorer.exe and give it something like 4 of the CPU cores. Since processor affinity is inherited, running Diablo 3 from within Windows Explorer (aka your desktop) now works. Hey Blizzard! Try testing on something more than a dual core Pentium D!

I’ve come up with a trick that can be used in some very specific scenarios in order to avoid extra array copying when calling into native code from managed code (e.g. C#). This won’t usually work for regular P/Invokes into all your favorite Win32 APIs, but I’m hopeful it’ll be useful for someone somewhere. It’s not even evil! No hacks required.

Many native methods require the caller to allocate the array and specify its length, and then the callee fills it in or returns an error code indicating that the buffer is too small. The technique described in this post is not necessary for those, as they can already be used optimally without any copying.

Instead, let’s talk about the general problem if you’re calling a native method which does the array allocation and then returns it. You can’t use it as a “managed array” unless you copy it into a brand new managed array (don’t forget to free the native array). In other words, native { T* pArray; size_t length; } cannot be used as a simple managed T[] as-is (or even with modification!). The managed runtime didn’t allocate it, won’t recognize it, and there’s nothing you can do about it. Very few managed methods will accept a pointer and a length; most require a managed array. This is particularly irksome when you want to use System.IO.Stream.Read() or Write() with bytes from a native-side buffer.
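To make the baseline concrete, here’s roughly what that copy looks like for a native buffer of doubles. This is a hedged sketch; free_native_buffer is a hypothetical P/Invoke standing in for whatever free function the native library exposes.

    using System;
    using System.Runtime.InteropServices;

    static class CopyBaseline
    {
        // pNativeArray points at 'count' doubles that were allocated by the native library.
        // The only way to treat them as a managed double[] is to copy every element.
        public static double[] CopyToManaged(IntPtr pNativeArray, int count)
        {
            double[] managedCopy = new double[count];
            Marshal.Copy(pNativeArray, managedCopy, 0, count);

            // ... and the native buffer still has to be freed via another P/Invoke,
            // e.g. NativeMethods.free_native_buffer(pNativeArray); (hypothetical)
            return managedCopy;
        }
    }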

Paint.NET uses a library written in classic C called General Polygon Clipper (GPC), from The University of Manchester, to perform polygon clipping. This is used, among other things, when you draw a selection with a mode such as add (union), subtract (exclude), intersect, and invert (“xor”). I blogged about this 4 years ago when version 3.35 was about to be released: using GPC made these operations immensely faster, and I saved a lot of time and headache by purchasing a commercial use license for the library and then integrating it into the Paint.NET code base. tl;dr: The algorithms for doing this are nontrivial and rife with special corner cases, and I’d been struggling to find enough sequential time to implement and debug it on my own.

Anyway, the data going into and coming out of GPC is an array of polygons. Each polygon is an array of points, each of which is just a struct containing X and Y as double-precision floating point values. To put it simply, it’s just a System.Windows.Point[][] (I actually use my own geometry primitives nowadays, but that’s another story, and it’s the same exact thing).

Getting this data into GPC from the managed side is easy. You pin every array, and then hand off the pinned pointers to GPC. Since you can’t use the “fixed” expression with a dynamic number of elements, I use GCHandle directly and stuff them all into GCHandle[] arrays for the duration of the native call. This is great because on the managed side I can work with regular ol’ managed arrays, and then send them off to GPC as “native arrays” by pinning them and using the pointers obtained from GCHandle.AddrOfPinnedObject().
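If you haven’t used GCHandle for this before, the pinning side of it looks roughly like this. It’s a simplified sketch of the pattern, not the actual Paint.NET code:

    using System;
    using System.Runtime.InteropServices;
    using System.Windows;

    static class PinningSketch
    {
        // Pins each polygon's Point[] and returns the raw pointers to hand to native code.
        // The caller must Free() every GCHandle once the native call has returned.
        public static IntPtr[] PinPolygons(Point[][] polygons, out GCHandle[] handles)
        {
            handles = new GCHandle[polygons.Length];
            IntPtr[] pointers = new IntPtr[polygons.Length];

            for (int i = 0; i < polygons.Length; ++i)
            {
                // Pinning prevents the GC from moving the array while native code holds the pointer
                handles[i] = GCHandle.Alloc(polygons[i], GCHandleType.Pinned);
                pointers[i] = handles[i].AddrOfPinnedObject();
            }

            return pointers;
        }
    }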

Now, here’s the heart breaking part. GPC allocates the output polygon using good ol’ malloc*. So when I get the result back on the managed side, I must copy every single last one so that I can use it as a Point[] (a managed array). This ends up burning a lot of CPU time, and can cause virtual address space claustrophobia on 32-bit/x86 systems when working with complex selections (e.g. Magic Wand), as you must have enough memory for 2 copies of the result while you’re doing the copying. (Or you could free each native array after you copy it into a managed array, but that’s an optimization for another day, and isn’t as straightforward as you’d think because freeing the native memory requires another P/Invoke, and those add up, and so it might not actually be an optimization.)

But wait, there’s another way! Since the code for GPC is part of my build, I can modify it. So I added an extra parameter called gpc_vertex_calloc:

    typedef gpc_vertex* (__stdcall * gpc_vertex_calloc_fn)(int count);

    GPC_DLL_EXPORT(
    void, gpc_polygon_clip)(
        gpc_op                set_operation,
        gpc_polygon          *subject_polygon,
        gpc_polygon          *clip_polygon,
        // result_polygon holds the arrays of gpc_vertex, aka System.Windows.Point
        gpc_polygon          *result_polygon, 
        gpc_vertex_calloc_fn  gpc_vertex_calloc);

("gpc_vertex” is GPC’s struct that has the same layout as System.Windows.Point: X and Y, defined as a double.)

In short, I’ve changed GPC so that it uses an external allocator, by passing in a function pointer it should use instead of malloc. And now if I want I can have it use malloc, HeapAlloc, VirtualAlloc, or even the secret sauce detailed below.

On the managed side, the interop delegate definition for gpc_vertex_calloc_fn gets defined as such:

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    public delegate IntPtr gpc_vertex_calloc_fn(int count);

And gpc_polygon_clip’s interop definition looks like this:

    [DllImport("PaintDotNet.SystemLayer.Native.x86.dll", CallingConvention = CallingConvention.StdCall)]
    public static extern void gpc_polygon_clip(
        [In] NativeConstants.gpc_op set_operation,
        [In] ref NativeStructs.gpc_polygon subject_polygon,
        [In] ref NativeStructs.gpc_polygon clip_polygon,
        [In, Out] ref NativeStructs.gpc_polygon result_polygon,
        [In] [MarshalAs(UnmanagedType.FunctionPtr)] NativeDelegates.gpc_vertex_calloc_fn gpc_vertex_calloc);

So, we’re halfway there, and now we need to implement the allocator on the managed side.

    internal unsafe sealed class PinnedManagedArrayAllocator<T>
        : Disposable
          where T : struct
    {
        private Dictionary<IntPtr, T[]> pbArrayToArray;
        private Dictionary<IntPtr, GCHandle> pbArrayToGCHandle;

        public PinnedManagedArrayAllocator()
        {
            this.pbArrayToArray = new Dictionary<IntPtr, T[]>();
            this.pbArrayToGCHandle = new Dictionary<IntPtr, GCHandle>();
        }
 
        // (Finalizer is already implemented by the base class (Disposable))

        protected override void Dispose(bool disposing)
        {
            if (this.pbArrayToGCHandle != null)
            {
                foreach (GCHandle gcHandle in this.pbArrayToGCHandle.Values)
                {
                    gcHandle.Free();
                }

                this.pbArrayToGCHandle = null;
            }

            this.pbArrayToArray = null;

            base.Dispose(disposing);
        }

        // Pass a delegate to this method for "gpc_vertex_calloc_fn". Don't forget to use GC.KeepAlive() on the delegate!
        public IntPtr AllocateArray(int count)
        {
            T[] array = new T[count];
            GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
            IntPtr pbArray = gcHandle.AddrOfPinnedObject();
            this.pbArrayToArray.Add(pbArray, array);
            this.pbArrayToGCHandle.Add(pbArray, gcHandle);
            return pbArray;
        }

        // This is what you would use instead of, e.g. Marshal.Copy()
        public T[] GetManagedArray(IntPtr pbArray)
        {
            return this.pbArrayToArray[pbArray];
        }
    }

(“Disposable” is a base class which implements, you guessed it, IDisposable, while also ensuring that Dispose(bool) never gets called more than once. That guarantee matters in other places where thread safety is critical. This class is specifically not thread safe, but it should be reasonably easy to make it so.)

And that’s it! Well, almost. I’m omitting the guts of the interop code, but nothing that should inhibit comprehension of this part of it. Also, the above code is not hardened for error cases, and should not be used as-is for anything running on a server or in a shared process. Oh, and I just noticed that my Dispose() method has an incorrect implementation: it shouldn’t be using this.pbArrayToGCHandle (specifically, it shouldn’t be foreach-ing on it), and should instead wrap that in its own IDisposable-implementing class … exercise for the reader? Or I can post a fix later if someone wants it.

After I’ve called gpc_polygon_clip, instead of copying all the arrays using something like System.Runtime.InteropServices.Marshal.Copy(), I just use GetManagedArray() and pass in the pointer that GPC retrieved from its gpc_vertex_calloc_fn, aka AllocateArray(). When I’m done, I dispose the PinnedManagedArrayAllocator and it unpins all the managed arrays. And this is much faster than making copies of everything.
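For reference, here’s a hedged sketch of what the call site ends up looking like. The containing class for the P/Invoke is assumed here to be called NativeMethods, the subject/clip structs are assumed to be already populated, and reading the vertex pointers back out of result_polygon is glossed over since it depends on the gpc_polygon struct layout:

    using (var allocator = new PinnedManagedArrayAllocator<Point>())
    {
        // Method group conversion: AllocateArray matches the gpc_vertex_calloc_fn signature
        NativeDelegates.gpc_vertex_calloc_fn vertexCalloc = allocator.AllocateArray;

        NativeStructs.gpc_polygon result = new NativeStructs.gpc_polygon();
        NativeMethods.gpc_polygon_clip(
            NativeConstants.gpc_op.GPC_UNION,
            ref subjectPolygon,
            ref clipPolygon,
            ref result,
            vertexCalloc);

        GC.KeepAlive(vertexCalloc); // keep the delegate alive across the native call

        // For each vertex pointer that GPC obtained by calling back into AllocateArray,
        // just look up the already-pinned managed array instead of using Marshal.Copy():
        //     Point[] vertices = allocator.GetManagedArray(pVertices);
    } // disposing the allocator unpins all of the managed arrays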

Now, the PinnedManagedArrayAllocator above isn’t the exact code I’m using. I’ve un-generalized it in the real code so I can allocate all of the arrays at once, instead of incurring a managed <-> native transition for each allocation (potentially hundreds of them). The above implementation also doesn’t have a “FreeArray” method; I had one, but I ended up not needing it, so I removed it.

So the next time you find yourself calling into native code which either 1) allows you to specify an external allocator, or 2) is part of your build, and which 3) involves lots of data and thus lots of copying and wasted CPU time, you might just consider using the tactic above. Your users will thank you.

Lastly, I apologize for my blog’s poor code formatting.

Legal: I hereby place the code in this blog post into the public domain for anyone to do whatever they want with. Attribution is not required, but I certainly appreciate if you send me an e-mail or post a comment and let me know it was useful for you.

* Actually in Paint.NET 3.5, I changed it to use HeapAlloc(). This way I can get an exception raised when it runs out of memory, instead of corrupt results. This does happen on 32-bit/x86, especially when using the Magic Wand on large images.

It’s been a while since I talked about some of the smaller features that have been implemented for Paint.NET 4.0. So, without further ado …

Light Color Scheme

Paint.NET 3.5 uses a blue color scheme. For 4.0 you can still use that, but the default is now the “Light” color scheme. The differences can be subtle, but the change is nice to have. The light theme also uses a gray canvas background (#CFCFCF, to be precise), which can be important for color matching.

Color Picker Enhancements

Ed Harvey, who wrote and has been maintaining “Ed Harvey Effects,” one of the most popular and interesting plugin packs, has contributed some more features to Paint.NET 4.0 recently. The first two are in the Color Picker and give you the ability to set the sampling size as well as whether it will sample just the current layer or the whole image:

Copy Merged

Ed Harvey is also responsible for implementing another highly requested feature, Copy Merged. When you have a selection active, Edit->Copy will take the pixels from the current layer, while Edit->Copy Merged will use the whole image. In Paint.NET v3.5 you could do this but it required you to 1) Flatten the image, 2) Copy, and finally 3) Undo the Flatten. Paint.NET 4.0 will let you do that in one keystroke, and mirrors Photoshop’s functionality and keyboard shortcut. It also means you don’t have to wipe out your Redo history.

Tool Blending Modes

Paint.NET has always had an option to let you choose between Normal and Overwrite blending. The latter is necessary if you ever want to use anything but the Eraser tool in order to “draw transparent pixels.” This has been extended to include all of the layer blend modes, and still includes Overwrite. Currently this only works on the tools which have been upgraded to the new rendering system, namely the Pencil and Gradient tools, but all the others will be upgraded in due time. (I have already started upgrading the shape tools, for instance.)

Here’s an example comparing Normal and Xor blending modes with a rounded rectangle*:

Layer Reordering with Drag-and-Drop

In Paint.NET v3.5 you have to use the cumbersome Move Layer Up and Move Layer Down buttons to change layer ordering. Paint.NET 4.0 adds what you would naturally want to do here, namely the ability to just drag-and-drop the layers to reorder them. In addition, there are some nice animations for this and all the other things that can change the contents of the Layers window.

Antialiased Selections

Whenever you have a selection active, all drawing is clipped to it. Paint.NET 4.0 can finally do this clipping with antialiasing. This results in a much smoother edge. This was actually quite simple to implement with the new rendering engine that’s in place for 4.0. (Note: Feathered selections and other gizmos are another matter entirely and will hopefully make it into a post-4.0 release without too much of a wait.)

The first option gives you the same rendering that Paint.NET v3.5 and earlier uses. The second uses 2×2 super sampling on the clipping mask, and the third uses 3×3 super sampling. I experimented with 4×4 super sampling but the improvement wasn’t very noticeable; in addition, performance went down and memory usage went up.

Here’s an example of the quality levels with a circular selection that’s had a gradient drawn inside of it:

Right now the default is Antialiased (2×2 super sampling). I’ll be doing some further experimenting, and decide whether the default should be High Quality and whether the “normal quality” option should even be present.

Anyway, that’s all for now!

* Astute readers may notice that the rounded rectangle’s corner radius does not match what 3.5 uses … yes, this will finally be configurable. Right now I’ve just got a test tool that renders a fixed size, but in short order the shape tools will get some fantastic upgrades, including configuring the corner radius for a rounded rectangle.

This is just a little note to explain why the forum may not be accessible for a little while. It’s moving to a better server, although it’ll have the same http:// location.

You shouldn’t have to do anything other than please be patient for the next day or so, and then things should be back to normal once all the new DNS stuff propagates.

Apparently we were put on the wrong server during the last migration, and just recently we’ve been bumping into all of its CPU usage limitations :)

Back in 2000, I had just finished my freshman year of college at Washington State University and was back at the best summer job you could possibly imagine: sales associate at the local OfficeMax (office supplies store). Now, by “best” I really mean “boring” but it wasn’t a bad job, especially for a 19 year old (at least I wasn’t flipping burgers, right?). Selling printer ink and paper wasn’t the best use of my time, but maybe there’s an analogy with Einstein working at the patent office (we can all dream).

Anyway, to pass the time I decided to see if I could write a fully professional-quality Windows utility from start to finish in just a few weeks. I had recently read an article by Mark Russinovich about the new Windows 2000 defragmentation APIs, and it had source code for a Contig command-line utility that would defragment one file. (Back then I remember there being some source code, but I can’t find it now.) Using that information as a basis, I decided to write my own disk defragmenter, because clearly a college sophomore knows how to do this stuff. It would have a GUI version (“Fraginator”) and a command-line version (unfrag.exe), full CHM documentation, and a mascot. I wrote it in some mangled form of Visual C++ 6.0 with STL, if I remember correctly.

“Defrag, baby!” The Fraginator was not joking around with your data (I was a bit goofier with my coding back then). The picture is actually stolen from Something Awful’s satirical Jeff K from one of his comics where he thinks he’s a buff super hero or something (warning: those links may not be safe for work, or even safe for life, but they are at least occasionally hilarious). Hopefully Lowtax won’t come after me for that.

I finished the project up, put it up with source code (GPL) on my podunk of a website (I think it was hosted at zipcon.net, a small Seattle ISP), and mostly just forgot about it…

… until last week when I searched around for it and discovered that the ReactOS guys had started to include it in the applications distro about, oh, 6 years ago. They had even converted it to Unicode and translated it to a dozen languages or so. I thought, hey that’s great, someone actually likes it and is maybe even using it! I certainly was not expecting to find anything of the sort.

I was browsing through their “new” source code tree for it and immediately found a relic in the dialog resource file, clearly from a goofier moment:

I think I had this label control in there for layout purposes only, as it clearly doesn’t show up in the UI when you boot it up. But wait, it gets better. They haven’t removed this (not a bad thing), and in fact they’ve translated it.

So there you go. “I am a monkey, here me eeK” in Norwegian (I think). If you scout around with a web search you should be able to find a bunch of other translations. The French one is probably the most romantic pick-up line you’ll ever find, and there’s no need to thank me for your next success at the bars.

The last timestamp on the Fraginator.exe sitting on my hard drive is from 2003 and it doesn’t seem to work on Windows 7 anymore, unless you use it on a FAT32 USB stick. I doubt it’ll even compile in Visual Studio 2010. Oh well :) I’m glad the ReactOS guys are having use and fun with it. If you want the source, you’re better off checking out their copy of it. I don’t know if I even have a ZIP of that lying around anymore, and they’ve made decent improvements since then anyway.

Every once in a while you get an idea for an optimization that is so simple and so effective that you wonder why you didn’t think of it much earlier. As a case in point, someone recently posted to the forum that their image was taking a long time to save. It was an image comprised of 20 layers at 1280×720 pixels, and 20 seconds was deemed to be too long. The guy had a brand new AMD Phenom II X4 960T 3.0GHz gizmo with 4 CPU cores, plenty of RAM, and an SSD. It was obviously bottlenecked on the CPU because Task Manager was pegged at 100% while saving.

Please wait forever

The only advice I could give for the latest public release of Paint.NET (v3.5.10) was that if you need it to save quicker, either 1) use fewer layers, or 2) get a faster CPU with more cores. This is very technical advice, and in some circles that works just fine especially if it’s practical and economical, or if you’re dependent on saving time in order to make money (saving 5 minutes every day is often worth $100 to buy more RAM). Using fewer layers isn’t a good solution though, and replacing the CPU usually isn’t very practical either especially if it’s already a brand new upgrade.

It got me thinking, though, that Paint.NET can still do better. If you’re working with a lot of layers then chances are that each layer is mostly blank space: lots of transparent pixels with an RGBA value of #00000000 and a simple integer value of 0. So why not find a way to short-circuit that? Paint.NET does not yet use a “tiled” memory management scheme for layer data, but there was still a very good, very simple, and very effective solution just begging to be implemented.

PDN images contain a header, a bunch of structural information (# of layers, etc.), and then a whole lot of raw pixel data compressed using GZIP. It isn’t one long GZIP stream, however. Each layer is broken up into 256 KB chunks which are compressed separately. This does increase the file size, but not by much (a percent or two), and it lets me compress each block in its own thread and makes saving a much faster ordeal than it otherwise would be. Save and load times are able to drop almost linearly with the number of CPU cores.

So yesterday, in the 4.0 code base, I optimized this scenario further. For regions of the image that contain “homogeneous values,” meaning a 256 KB chunk where every pixel is the same color, I compress the chunk once and then cache the result. The next time I find a region with the same homogeneous color, I skip the compression altogether and emit the cached result from the previous time I compressed it. If a region isn’t homogeneous, that’s discovered pretty quickly, so it doesn’t waste much CPU time.

(These 256KB regions do not have any geometric shape associated with them, such as being a 128×128 tile. A 256KB chunk in an 800×600 image would be the first 81 rows, and then most of the 82nd row. The next chunk would start off near the end of the 82nd row, and then include almost the next 81 rows of pixel data.)
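Here’s the gist of the optimization as a hedged sketch. This is not the actual Paint.NET serialization code; the real implementation works on the layer’s raw memory, compresses chunks on multiple threads, and would therefore need a thread-safe cache:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.IO.Compression;

    static class ChunkCompressor
    {
        // Maps a homogeneous 32-bit pixel value to its previously compressed chunk
        private static readonly Dictionary<uint, byte[]> cache = new Dictionary<uint, byte[]>();

        // Compresses one ~256 KB chunk of pixel data.
        public static byte[] Compress(uint[] pixels)
        {
            // Cheap test: is every pixel in this chunk the same value?
            // Non-homogeneous chunks bail out of this loop almost immediately.
            bool homogeneous = true;
            for (int i = 1; i < pixels.Length; ++i)
            {
                if (pixels[i] != pixels[0])
                {
                    homogeneous = false;
                    break;
                }
            }

            byte[] compressed;
            if (homogeneous && cache.TryGetValue(pixels[0], out compressed))
            {
                return compressed; // skip GZIP entirely and reuse the cached result
            }

            using (MemoryStream output = new MemoryStream())
            {
                using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress, true))
                {
                    byte[] bytes = new byte[pixels.Length * sizeof(uint)];
                    Buffer.BlockCopy(pixels, 0, bytes, 0, bytes.Length);
                    gzip.Write(bytes, 0, bytes.Length);
                }

                compressed = output.ToArray();
            }

            if (homogeneous)
            {
                cache[pixels[0]] = compressed; // warm the cache for the next identical chunk
            }

            return compressed;
        }
    }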

The result? About 50% faster save times! The first time you save the image will take the longest. After that the cache will be “warmed up” and saving is even faster, resulting in up to about a 90% reduction in save time.

This is a great optimization: there’s no change to the PDN format, so images saved in 4.0 are still completely compatible with earlier versions or with other applications that have done the grunt work to support PDN (I think GIMP does nowadays). It’s just faster.

Anyway, it’s been awhile since I posted. Fear not, work is still progressing on 4.0. Silence is just an indicator that I’ve been really busy, not abandonment. Ed Harvey has contributed a few really cool features that have been popular requests, such as the ability to use the Color Picker tool to sample all layers, and to sample the average from a 3×3, 5×5, 11×11, 33×33, or 55×55 region (apparently that’s what Photoshop does). He also added a few enhancements to the color wheel so you can use Ctrl, Alt, or Shift to constrain the hue, saturation, or value. I’ve finally added drag-and-drop to the Layers window for moving layers around, along with spiffy animations and transitions (it uses the same code from the newer image thumbnail list at the top of the window). I even added a setting to choose between the Blue theme (from 3.5) and a new Light/white theme (which is the new default). I’ve decided to cut some features in the interest of being able to finish and ship 4.0 this year, such as a plugin manager, but there’s always another train coming (4.1, 4.2, etc.) so I wouldn’t hold a memorial service just yet.

Sort of, anyway. They don’t touch the malware side of the issue at all. I got this in my inbox just now as part of their “Software Publisher Newsletter,” written by their Vice President & General Manager, Sean Murphy:

We are on the verge of fulfilling our vision of coming to market with an installer model that delivers files faster and more efficiently to users, while enabling developers to a) opt-in to the Installer, b) influence the offers tied to their files, c) gain reporting insight into the download funnel, and d) share in the revenue generated by the installer.

The ability to opt in is not a feature request. It’s a Day 1 requirement. Download.com has been making money on other people’s software by barfing malware all over their customers’ systems (and by “their” I’m referring to the software authors).

Consider the Customer Experience Improvement Program (CEIP) that Microsoft has as part of Windows, Office, and numerous other software they release. It gathers information about how you use the software (e.g. which features you used, which buttons you clicked), and reports it to Microsoft. This clearly has enormous privacy implications. But you have to opt in to use it. The software asks clearly and politely if you’ll let them do this. The data is recorded in an anonymous manner. Things like permission and anonymity are not “features.” They are requirements.

Oh, and Microsoft actually uses that data to improve the software they release. Bazinga.

He continues:

First, on the press that surfaced yesterday: a developer expressed anger and frustration about our current model and how his file was being bundled. This was a mistake on our part and we apologize to the developer and user communities for the unrest it caused. As a rule, we do not bundle open source software and in addition to taking this developers file out of the installer flow, we have gone in and re-checked all open source files in our catalog.

Ok, whatever Sean. Nobody believes you. You’re not sorry about what you did, you’re sorry that people noticed and cared. You care that the bad press crossed a quantitative threshold, and that it just might affect your quarterly profit reports.

I wanted to verify that they had indeed fixed their catalog of open source software. VLC Player is, in fact, no longer being wrapped with their download wrapper (earlier blogs reported it was being bundled). Same goes for GIMP, GIMP Portable, and GTK+ 2 Runtime Environment (which are the first 3 hits if you search for “GIMP”). So from this tiny sample size of 4, it seems he is at least telling the truth for once. (I’d provide links to their pages on download.com, but I’d really rather not contribute to their page rank in any way.)

You can read the whole thing, if you’re really bored, over at their website.

I posted earlier about how download.com was wrapping Paint.NET with their own download manager / installer which would install a toolbar or some other stupid thing. Thankfully this issue was resolved, although not without a few bumps (“yes we removed it” … 24 hours later … “no you didn’t” … 6 hours later … “OH sorry we’ve fixed it” … “damn right you have”).

“AngryTechnician” just tipped me off that now this issue is starting to get even more attention. It looks like they’re even wrapping Nmap, a free utility for doing network security scanning (which I’m not familiar with).

“It is bad enough when software authors include toolbars and other unwanted apps bundled with their software. But having Download.Com insert such things into 3rd party installers is even more insidious. When users find their systems hosed (searches redirected, home pages changed, new hard-to-uninstall toolbars taking up space in their browser) after installing software, they are likely to blame the software authors. But in this case it is entirely Download.com’s fault for infecting the installers! So while Download.Com takes the payment for exploiting their user’s trust and infecting the machines, it is the software authors who wrongly take the blame! Of course it is users who pay the ultimate price of having their systems infected just to make a few bucks for CNET.”

Apparently some antivirus software has gotten on the right side of this issue and are now classifying the CNET download wrapper as malware.

Now, here’s the hilarity. CNET claims that this thing “powers secure downloads”. If you read that post, it’s clear they’re grasping at straws.

“In addition to making our downloads more secure, it has extra features like the ability to pause a download and launch the software installer immediately after it’s finished downloading.”

Whoa there … I can pause downloads?! Stop the presses!

Look, CNET, this is not a great monetization tool for your partners. It’s a fucking virus, and you’re trashing the goodwill and reputation of everyone around you in order to make a buck. One thing I realized early on is that this is not how you build a career (it’s one reason why I refuse to do any type of bundling). This is not an exciting opportunity. Look at it this way: while you’re at the bar, sharing a drink with friends, and they’re talking about something cool they did or a tricky technical problem they solved at work, you are going to keep your mouth shut. Because you’re ashamed of your job.

For continued hilarity, check out the title of this blog post. O rly? “Be careful when downloading software?” Does that advice include editing your hosts override file so that download.com redirects to disney.com?
