The primary goal of this update is preparing for the v4.0 release: v3.5.10 will not be able to offer the v4.0 update, but v3.5.11 will. I’m not a fan of putting out an update for purely infrastructure purposes, so I’ve also back-ported a handful of low-risk improvements from the v4.0 code base. This way we all win.

As usual, you can download it directly from the website or you can use the built-in updater via Utilities -> Check for Updates.

Here are the changes for this release:

  • Fixed: The Gaussian Blur effect was incorrectly calculating alpha values for non-opaque pixels. (http://forums.getpaint.net/index.php?/topic/18483-gaussian-blur-mistreats-alpha/)
  • Improved performance of the Sharpen effect by about 25%
  • Improved performance of the Median effect by about 30%
  • Improved performance of the Fragment effect by about 40%
  • Improved performance of the Unfocus effect by about 100%
  • Reduced memory usage when many selection manipulation operations are in the history/undo stack (the undo data is now saved to disk)
  • The built-in updater now supports upgrading to paint.net 4.0 (once it’s available)

When I announced the beta for v3.5.11, I also talked a little about 4.0 and the progress that’s been made. If you haven’t read that, please catch up!

There have been rumors floating around that Paint.NET is “dead.” This is not true! :) The 4.0 update has simply been a huge project, and I only have a limited amount of time at my disposal. (Apparently if you go 22 months without an update, that means “dead” in Internet years. Sorry!) Plus, sometimes you just want to go out and enjoy summer instead of writing code.

For comparison, v3.5.11 is about 203,000 lines of code, whereas v4.0 is at about 391,000. What’s new? Well, it has a brand new, asynchronous, fully multithreaded, hardware accelerated (via Direct2D) rendering engine that performs very well even with huge images, all while consuming less memory than v3.5. I’m not exaggerating the performance scaling either: whether you’ve got a 16-core Dual Xeon or a 400 megapixel image, paint.net 4.0 will be quite happy (other combinations are also very copacetic, of course).

The whole UI has been updated to use newer rendering toolkits such as Direct2D and DirectWrite, and the main window has a greatly simplified and consolidated layout. All of the drawing tools have been updated to use the new rendering system along with a new “fine-grained” history system (to clarify: the UI for history is the same, but the underlying code is radically different). This has made possible a much richer “WYSIWYG” editing model, exemplified by the new Paint Bucket and Magic Wand tools. Selection rendering quality and performance are way up. The brush tools (Paintbrush, Eraser, Clone Stamp, Recolor) now support soft brushes, and the new Shapes tool replaces the four separate shape tools in v3.5 while providing many more shapes to use. There are also a bunch of other small, miscellaneous improvements that span the UI and tools, including things like “Copy Merged” and antialiased selection rendering.

Anyway, enjoy!

This is probably not the update you were expecting :) I need to push out an update to v3.5 in preparation for the eventual release of v4.0, and it’s necessary to do this sooner rather than later to make sure everyone is up-to-date by then. I’m releasing a “beta” today to make sure everything is still working (compilers, packagers, updaters), and then I’ll be pushing out the Final/RTM in a few days.

The primary goal of this update is preparing for the v4.0 release: v3.5.10 will not be able to offer the v4.0 update, but v3.5.11 will. I’m not a fan of putting out an update for purely infrastructure purposes, so I’ve also back-ported a handful of low-risk improvements from the v4.0 code base. This way we all win.

As usual, you can download it directly from the website or you can use the built-in updater. Make sure that you enable “Also check for pre-release (beta) versions,” which you can do by going to Utilities -> Check for Updates, and then clicking on the Options button.

Here are the changes for this release:

  • Fixed: The Gaussian Blur effect was incorrectly calculating alpha values for non-opaque pixels. (http://forums.getpaint.net/index.php?/topic/18483-gaussian-blur-mistreats-alpha/)
  • Improved performance of the Sharpen effect by about 25%
  • Improved performance of the Median effect by about 30%
  • Improved performance of the Fragment effect by about 40%
  • Improved performance of the Unfocus effect by about 100%
  • Reduced memory usage when many selection manipulation operations are in the history/undo stack (the undo data is now saved to disk)
  • The built-in updater now supports upgrading to paint.net 4.0 (once it’s available)

As for paint.net 4.0, progress has been speeding up. All of the tools have been ported and upgraded for the new rendering and history systems, and there are only one or two small features left. Once those are done, which should be soon since they’re fairly simple and straightforward, I’ll be in strict bug-fixing mode in preparation for a public alpha release! As usual I have no promises as to when this will happen, but we’ll go with “soon.”

Another recent change in 4.0 that I’m very happy about is further improvement to selection rendering performance. I talked about this awhile ago when I detailed how the selection is now rendered using background threads and hardware acceleration. Back then I also said, “Manipulating selections is still just as slow as it ever was, and over time I plan to move that work off the UI thread.” Well, I’m happy to report that I’ve now been able to move almost all of the remaining CPU-intensive geometry processing off the UI thread, and I’ve also added a few other tricks so that most selection manipulation can be done at a full 30-60 frames per second! If you’ve ever drawn a complex selection, either by hand or with the Magic Wand, and then proceeded to move/scale/rotate it with the Move Selection tool, you’ve probably experienced severe lag. That lag is now almost entirely gone, and the improvement isn’t isolated to that scenario. It’s very cool stuff.

Wow, it’s been awhile since I posted! Let’s see what’s new …

Brushes

The new brush engine is still in its infancy so I don’t have any good screenshots I’m willing to share at this point. It fully supports “softness,” which is a staple of every brush-based drawing program other than Paint.NET (pre-4.0 :) ). I’ve found it a bit tricky to get good performance within the new rendering engine, but I’ve mostly solved how to do it right (it’s a classic performance vs. memory usage trade-off) and just need to write the actual code. The initial 4.0 release will not support custom brush shapes (“stamps”), but it should be fairly straightforward to add them afterward.

Once the brush engine is in place for the paintbrush tool, I will be able to quickly rebuild the eraser, clone stamp, and recolor tools so they can all have the same features and rendering quality.

Pressure Sensitivity?

I just got a Surface Pro, and it’s pretty slick. More importantly, at least for Paint.NET, it has a good Wacom-based stylus/pen with pressure sensitivity. I originally dropped pressure sensitivity in v3.5 because that part of the code was getting in the way of some very important improvements to the input system for the brush tools. That in itself wasn’t a good reason for dropping it, but I also had no hardware to test with, so I couldn’t be sure I wasn’t breaking pressure sensitivity (or worse). Now I’ve finally got some good hardware for this, so 4.0 might support it, at least on Windows 8 and up, which has new APIs that provide this as a first-class input mechanism. From what I’ve looked at, it’s promising, but I’m still not sure if it’ll work the way I need it to. Cross your fingers.

Shapes

I haven’t talked about the new Shapes tool yet, which is a cornerstone of the new toolset. Instead of having one tool for each shape (rectangle, circle, etc.), there is a single Shapes tool and you choose your shape from the toolbar:

Once you’ve drawn a shape you’re free to move, rotate, and resize it. You can also change the shape type or adjust everything else about it (colors, brush size, etc.) until you’ve committed it to the layer (and of course, “fine-grained history” is fully supported). You can resize the shape using the 8 corner handles, you can move it with the “compass” handle that appears to the lower right of the shape, and you can rotate by placing the mouse between the bottom-right resize handle and the move handle. When you do that, a two-sided curvy arrow appears underneath the mouse cursor to let you know you can drag there to do some rotation:

(You can also move by dragging elsewhere, but the compass handle makes it very obvious as to where you can always drag to move it.)

The handle in the center, which I guess I call “the screw”, can be moved around and lets you redefine what a rotation will use as its center point.

This UI for the resizing, moving, and rotating is the same one that the new Move tools use. Consistency for the user + reusability for the developer = good.

Custom shapes will not be supported in 4.0, but are planned for a release soon after that (sorry y’all, gotta prioritize!). All of the shapes stuff is based on a programming model that’s nearly identical to the Geometry system in WPF/Silverlight/XAML, so once you can add your own shapes it’ll be easy to find examples online with some XAML or path markup which you can then use in Paint.NET.

Not Abandoned

Lastly, to all the people who’ve sent e-mails or left comments asking if Paint.NET is still alive: yes! I just haven’t updated the blog in awhile. I also haven’t made much progress in the last few months because I haven’t had as much time for it; the amount of time I was putting into it was burning me out a bit. But yes, it’s still alive! 4.0 is still on the way, it’s just a really large project that takes a lot of time.

Last month I posted about how you can use Multicore JIT, a .NET 4.5 feature, even if your app is compiled for .NET 4.0. It’s a great feature which can help your app’s startup performance with very little code change: it’s essentially free. In some cases (e.g. ASP.NET) it’s enabled automatically.

But wait, there’s more!

Over on the .NET Framework Blog, Dan Taylor has posted a really good write-up about Multicore JIT with graphs showing the performance improvements when it’s applied to Paint.NET 4.0, Windows Performance Analyzer, and Windows Assessment Console. I was involved in the code changes for each of these (obviously for Paint.NET), which leads into the next link …

We also did a video interview with Vance Morrison (Performance Architect on .NET) which is now posted over at Channel 9: http://channel9.msdn.com/posts/net-45-multicore-jit

This video showcases some of the things I can do with the new rendering engine and tool transaction system in paint.net 4.0. Even the Paint Bucket tool can get awesome “WYSIWYGEWYEI” (What You See Is What You Get … Especially While You’re Editing It, which needs a better name) and Fine-Grained History (you can undo/redo every change, not just those that commit pixels to the layer).

One big annoyance of the Paint Bucket tool in every imaging app out there is that it doesn’t do a good job of letting you explore and be creative. There are two primary “inputs” for it: the origin point (where you clicked), and the tolerance setting. Where you click determines the point from which the flood fill algorithm executes, and which color is used as the basis for comparison: other colors whose (Euclidean) “distance” from that basis color is less than the tolerance are filled in with whatever color or pattern you specify. Black and white are as far apart as possible and require a high tolerance value to “notice” each other, while shades of the same color are relatively close together and will be included even at lower tolerance values.
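To make that concrete, here’s a minimal sketch of that kind of tolerance test (an illustration only, not paint.net’s actual code; the helper name and the 0-255 channel assumption are mine):

    using System;

    static class FloodFillMath
    {
        // Sketch: fill a candidate pixel when its Euclidean distance from the
        // basis color (the pixel under the click point) is within the tolerance.
        // Channels are assumed to be 0-255, with alpha included in the distance.
        public static bool ShouldFill(
            byte r1, byte g1, byte b1, byte a1,
            byte r2, byte g2, byte b2, byte a2,
            double tolerance)
        {
            double dr = r1 - r2;
            double dg = g1 - g2;
            double db = b1 - b2;
            double da = a1 - a2;
            double distance = Math.Sqrt(dr * dr + dg * dg + db * db + da * da);
            return distance <= tolerance;
        }
    }

With this measure, black and white (at equal alpha) come out to a distance of about 441, which is why it takes a high tolerance for them to “notice” each other.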

What happens in most imaging apps* is that you click somewhere with the Paint Bucket tool, look at the result, and decide that either you wish you’d clicked somewhere else or used another tolerance value. On a rare occasion, it looks perfect and you’re done.

Then you click undo.

Next, you click somewhere else, possibly after editing the tolerance in the toolbar. Then you realize it’s not exactly what you want, so …

Then you click undo. And repeat. And repeat, and repeat, and repeat.

In paint.net 4.0 I’m working to finally get rid of that repetition, which is work I started with the new Gradient tool I added back in 3.0 (2006!). Once you click somewhere with the Paint Bucket tool, you can go edit the tolerance in the toolbar which essentially causes your click action to be re-executed. You can also move your mouse down into the canvas where you clicked, and drag around a handle which will move the origin point. You can change the color, or anything else that affects how the fill is rendered. You can use the undo and redo commands to walk through every adjustment that you’re trying out.

This is a very powerful addition to the tools in paint.net which really enables you to quickly explore the creative landscape in a way that no other image editing software can. It also lets you gain an intuitive understanding of settings that do not necessarily lend themselves to short, intuitive descriptions (like tolerance!), but which are easily learned through interactive exploration. This video was recorded a few weeks ago. Since then I’ve added antialiasing as well as the ability to choose between sampling the current layer or the whole image, and have also made other performance improvements. (I’ve also removed the “old” Paint Bucket tool, which is why you see the “new” version of it sitting at the bottom of the Tools window in the video.)

This is my first video posting, we’ll see how it goes! I didn’t think I could properly discuss this feature with just words and pictures.

* every one that I know of, but I used the word “most” just in case I’m wrong ;)

.NET Framework 4.5 contains a very cool new feature called Multi-Core JIT. You can think of it as a profile-guided JIT prefetcher for application startup, and you can read about it in a few places …

I’ve been using .NET 4.0 to develop Paint.NET 4.0 for the past few years. Now that .NET 4.5 is out, I’ve been upgrading Paint.NET to require it. However, due to a circumstance beyond my control at this moment, I can’t actually use anything in .NET 4.5 (see below for why). So Paint.NET is compiled for .NET 4.0 and can’t use .NET 4.5’s features at compile time, but as it turns out they are still there at runtime.

I decided to see if it was possible to use the ProfileOptimization class via reflection even if I compiled for .NET 4.0. The answer: yes! You may ask why you’d want to do this at all instead of biting the bullet and requiring .NET 4.5. Well, you may need to keep your project on .NET 4.0 in order to maintain maximum compatibility with your customers who aren’t yet ready (or willing :) ) to install .NET 4.5. Maybe you’d like to use the ProfileOptimization class in your next “dot release” (e.g. v1.0.1) as a free performance boost for those who’ve upgraded to .NET 4.5, but without leaving behind those who haven’t.

So, here’s the code, which I’ve verified as working just fine if you compile for .NET 4.0 but run with .NET 4.5 installed:

using System;
using System.IO;
using System.Reflection;

Type systemRuntimeProfileOptimizationType = Type.GetType("System.Runtime.ProfileOptimization", false);
if (systemRuntimeProfileOptimizationType != null)
{
    MethodInfo setProfileRootMethod = systemRuntimeProfileOptimizationType.GetMethod("SetProfileRoot", BindingFlags.Static | BindingFlags.Public, null, new Type[] { typeof(string) }, null);
    MethodInfo startProfileMethod = systemRuntimeProfileOptimizationType.GetMethod("StartProfile", BindingFlags.Static | BindingFlags.Public, null, new Type[] { typeof(string) }, null);

    if (setProfileRootMethod != null && startProfileMethod != null)
    {
        try
        {
            // Figure out where to put the profile (go ahead and customize this for your application)
            // This code will end up using something like, C:\Users\UserName\AppData\Local\YourAppName\StartupProfile\
            string localSettingsDir = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
            string localAppSettingsDir = Path.Combine(localSettingsDir, "YourAppName");
            string profileDir = Path.Combine(localAppSettingsDir, "ProfileOptimization");
            Directory.CreateDirectory(profileDir);

            setProfileRootMethod.Invoke(null, new object[] { profileDir });
            startProfileMethod.Invoke(null, new object[] { "Startup.profile" }); // don’t need to be too clever here
        }

        catch (Exception)
        {
            // discard errors. good faith effort only.
        }
    }
}

I’m not sure I’ll be using this in Paint.NET 4.0 since it uses NGEN already, but it’s nice to have this code snippet around.
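For reference, if you’re able to compile directly against .NET 4.5, the same thing is just two calls on System.Runtime.ProfileOptimization, no reflection required (using the same profileDir as in the snippet above):

    using System.Runtime;

    ProfileOptimization.SetProfileRoot(profileDir);
    ProfileOptimization.StartProfile("Startup.profile");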

So, why can’t I use .NET 4.5? Well, they removed support for Setup projects (*.vdproj) in Visual Studio 2012, and I don’t yet have the time or energy to convert Paint.NET’s MSI to be built using WiX. I’m not willing to push back Paint.NET 4.0 any further because of this. Instead, I will continue using Visual Studio 2010 and compiling for .NET 4.0 (or maybe I’ll find a better approach). However, at install time and application startup, it will check for and require .NET 4.5. The installer will get it installed if necessary. Also, there’s a serialization bug in .NET 4.0 which has dire consequences for images saved in the native .PDN file format, but it’s fixed in .NET 4.5 (and for .NET 4.0 apps if 4.5 just happens to be what’s installed).

I finally succumbed and bought a copy of Diablo 3 today, only to find out that it just doesn’t work:

Argh! No matter what I did, it would always crash. Every single time, over and over and over and over again.

In a last act of desperation before borrowing the DVD from a friend to try and load it that way, I had some Raymond Chen style psychic insight and thought it might be a multithreading bug. You see, I just put together a brand new Dual Xeon E5-2687W system. It is a beast: dual processor, 8 cores each, with HyperThreading. That means Task Manager shows 32 tiny little performance graphs. It makes compiling Paint.NET really fast (lots of C++/CLI these days), and is killer for working on all that multithreaded rendering code.

Anyway, the fix is a bit clumsy but it seems to work (so far! we’ll see if it still works after all the downloading is done):

  1. Download the “Diablo-III-Setup-enUS.exe” as usual, from Blizzard’s website.
  2. Run it, as usual (double click on it).
  3. When you get the UAC prompt, do NOT click Yes (yet).
  4. Instead, open up Task Manager and find the program in the “Processes” tab (Diablo-whatever.exe)
  5. Right click on it and then click on the “Set Affinity…” command.
  6. Make sure only one of the CPU checkboxes is enabled. If you’re on Windows 7, just click the “<All Processors>” node to uncheck everything, and then click on “CPU 0” to enable it. This will lock the program to just 1 CPU core/thread, minimizing the risk of the hypothesized multithreading bug. (A scriptable alternative is sketched just after this list.)
  7. Now you can click on Yes in the UAC prompt… and tada, it should work.
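If you’d rather script this than click through Task Manager, something like the following should also work (a quick sketch; the process name is an assumption, so match whatever Task Manager actually shows for the installer stub):

    using System;
    using System.Diagnostics;

    class PinToOneCore
    {
        static void Main()
        {
            // Find the running installer stub and lock it to CPU 0 (affinity
            // mask 0x1), the same effect as the Task Manager steps above.
            foreach (Process p in Process.GetProcessesByName("Diablo-III-Setup-enUS"))
            {
                p.ProcessorAffinity = (IntPtr)0x1;
            }
        }
    }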

I found some battle.net forum threads where tons of people are having this issue, and it goes on and on for pages and pages without any fix for the poor souls (so to speak).

Once it starts downloading you’ll probably want to do the same thing for “Blizzard Launcher.exe” except that this time you’ll 1) have to click the “Show processes from all users” button (bottom of Task Manager in the Processes tab), and then 2) enable all CPUs instead of having any of them disabled.

Hope this helps anyone else who’s having this frustrating problem.

Update: Once Diablo 3 finished downloading, it still would not start after clicking the Play button. “Diablo III.exe” would pop up in Task Manager, and then silently disappear a few seconds later. According to the Windows Event Viewer, it was crashing. However, I did get it to work, and the trick is to “Set Affinity” on explorer.exe and give it something like 4 of the CPU cores. Since processor affinity is inherited, running Diablo 3 from within Windows Explorer (aka your desktop) now works. Hey Blizzard! Try testing on something more than a dual core Pentium D!

I’ve come up with a trick that can be used in some very specific scenarios in order to avoid extra array copying when calling into native code from managed code (e.g. C#). This won’t usually work for regular P/Invokes into all your favorite Win32 APIs, but I’m hopeful it’ll be useful for someone somewhere. It’s not even evil! No hacks required.

Many native methods require the caller to allocate the array and specify its length, and then the callee fills it in or returns an error code indicating that the buffer is too small. The technique described in this post is not necessary for those, as they can already be used optimally without any copying.

Instead, let’s talk about the general problem if you’re calling a native method which does the array allocation and then returns it. You can’t use it as a “managed array” unless you copy it into a brand new managed array (don’t forget to free the native array). In other words, native { T* pArray; size_t length; } cannot be used as a simple managed T[] as-is (or even with modification!). The managed runtime didn’t allocate it, won’t recognize it, and there’s nothing you can do about it. Very few managed methods will accept a pointer and a length; most require a managed array. This is particularly irksome when you want to use System.IO.Stream.Read() or Write() with bytes from a native-side buffer.
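For example, here’s the shape of the copy you’re forced into today, which is exactly the kind of thing this post is about avoiding (a sketch; the names are mine):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    static class NativeBufferCopy
    {
        // To Write() bytes that live in a native buffer, you must first
        // duplicate them into a managed byte[]: a second copy of all the data.
        public static void WriteToStream(Stream stream, IntPtr pNativeBuffer, int length)
        {
            byte[] managedCopy = new byte[length];
            Marshal.Copy(pNativeBuffer, managedCopy, 0, length);
            stream.Write(managedCopy, 0, length);
        }
    }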

Paint.NET uses a library written in classic C called General Polygon Clipper (GPC), from The University of Manchester, to perform polygon clipping. This is used for, among other things, when you draw a selection with a mode such as add (union), subtract (exclude), intersect, and invert (“xor”). I blogged about this 4 years ago when version 3.35 was about to be released: using GPC made these operations immensely faster, and I saved a lot of time and headache by purchasing a commercial use license for the library and then integrating it into the Paint.NET code base. tl;dr: The algorithms for doing this are nontrivial and rife with special corner cases, and I’d been struggling to find enough sequential time to implement and debug it on my own.

Anyway, the data going into and coming out of GPC is an array of polygons. Each polygon is an array of points, each of which is just a struct containing X and Y as double-precision floating point values. To put it simply, it’s just a System.Windows.Point[][] (I actually use my own geometry primitives nowadays, but that’s another story, and it’s the same exact thing).

Getting this data into GPC from the managed side is easy. You pin every array, and then hand off the pinned pointers to GPC. Since you can’t use the “fixed” expression with a dynamic number of elements, I use GCHandle directly and stuff them all into GCHandle[] arrays for the duration of the native call. This is great because on the managed side I can work with regular ol’ managed arrays, and then send them off to GPC as “native arrays” by pinning them and using the pointers obtained from GCHandle.AddrOfPinnedObject().
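Here’s a minimal sketch of that pinning pattern (simplified from what Paint.NET actually does; the generic helper and the callNative delegate are stand-ins for the real GPC interop):

    using System;
    using System.Runtime.InteropServices;

    static class PinningSketch
    {
        // Pin each inner array (T must be blittable, like a point struct),
        // hand the pinned pointers to native code, then unpin everything.
        public static void WithPinnedArrays<T>(T[][] arrays, Action<IntPtr[]> callNative)
            where T : struct
        {
            var gcHandles = new GCHandle[arrays.Length];
            var pointers = new IntPtr[arrays.Length];

            try
            {
                for (int i = 0; i < arrays.Length; ++i)
                {
                    gcHandles[i] = GCHandle.Alloc(arrays[i], GCHandleType.Pinned);
                    pointers[i] = gcHandles[i].AddrOfPinnedObject();
                }

                callNative(pointers); // e.g. build the gpc_polygon structs from these
            }
            finally
            {
                for (int i = 0; i < gcHandles.Length; ++i)
                {
                    if (gcHandles[i].IsAllocated)
                    {
                        gcHandles[i].Free();
                    }
                }
            }
        }
    }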

Now, here’s the heart breaking part. GPC allocates the output polygon using good ol’ malloc*. So when I get the result back on the managed side, I must copy every single last one so that I can use it as a Point[] (a managed array). This ends up burning a lot of CPU time, and can cause virtual address space claustrophobia on 32-bit/x86 systems when working with complex selections (e.g. Magic Wand), as you must have enough memory for 2 copies of the result while you’re doing the copying. (Or you could free each native array after you copy it into a managed array, but that’s an optimization for another day, and isn’t as straightforward as you’d think because freeing the native memory requires another P/Invoke, and those add up, and so it might not actually be an optimization.)

But wait, there’s another way! Since the code for GPC is part of my build, I can modify it. So I added an extra parameter called gpc_vertex_calloc:

    typedef gpc_vertex* (__stdcall * gpc_vertex_calloc_fn)(int count);

    GPC_DLL_EXPORT(
    void, gpc_polygon_clip)(
        gpc_op                set_operation,
        gpc_polygon          *subject_polygon,
        gpc_polygon          *clip_polygon,
        // result_polygon holds the arrays of gpc_vertex (aka System.Windows.Point)
        gpc_polygon          *result_polygon, 
        gpc_vertex_calloc_fn  gpc_vertex_calloc);

(“gpc_vertex” is GPC’s struct that has the same layout as System.Windows.Point: X and Y, each defined as a double.)

In short, I’ve changed GPC so that it uses an external allocator, by passing in a function pointer it should use instead of malloc. And now if I want I can have it use malloc, HeapAlloc, VirtualAlloc, or even the secret sauce detailed below.

On the managed side, the interop delegate definition for gpc_vertex_calloc_fn gets defined as such:

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    public delegate IntPtr gpc_vertex_calloc_fn(int count);

And gpc_polygon_clip’s interop definition is like so:

    [DllImport("PaintDotNet.SystemLayer.Native.x86.dll", CallingConvention = CallingConvention.StdCall)]
    public static extern void gpc_polygon_clip(
        [In] NativeConstants.gpc_op set_operation,
        [In] ref NativeStructs.gpc_polygon subject_polygon,
        [In] ref NativeStructs.gpc_polygon clip_polygon,
        [In, Out] ref NativeStructs.gpc_polygon result_polygon,
        [In] [MarshalAs(UnmanagedType.FunctionPtr)] NativeDelegates.gpc_vertex_calloc_fn gpc_vertex_calloc);

So, we’re halfway there; now we need to implement the allocator on the managed side.

    internal unsafe sealed class PinnedManagedArrayAllocator<T>
        : Disposable
          where T : struct
    {
        private Dictionary<IntPtr, T[]> pbArrayToArray;
        private Dictionary<IntPtr, GCHandle> pbArrayToGCHandle;

        public PinnedManagedArrayAllocator()
        {
            this.pbArrayToArray = new Dictionary<IntPtr, T[]>();
            this.pbArrayToGCHandle = new Dictionary<IntPtr, GCHandle>();
        }
 
        // (Finalizer is already implemented by the base class (Disposable))

        protected override void Dispose(bool disposing)
        {
            if (this.pbArrayToGCHandle != null)
            {
                foreach (GCHandle gcHandle in this.pbArrayToGCHandle.Values)
                {
                    gcHandle.Free();
                }

                this.pbArrayToGCHandle = null;
            }

            this.pbArrayToArray = null;

            base.Dispose(disposing);
        }

        // Pass a delegate to this method for “gpc_vertex_calloc_fn”. Don’t forget to use GC.KeepAlive() on the delegate!
        public IntPtr AllocateArray(int count)
        {
            T[] array = new T[count];
            GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
            IntPtr pbArray = gcHandle.AddrOfPinnedObject();
            this.pbArrayToArray.Add(pbArray, array);
            this.pbArrayToGCHandle.Add(pbArray, gcHandle);
            return pbArray;
        }

        // This is what you would use instead of, e.g. Marshal.Copy()
        public T[] GetManagedArray(IntPtr pbArray)
        {
            return this.pbArrayToArray[pbArray];
        }
    }

(“Disposable” is a base class which implements, you guessed it, IDisposable, while also ensuring that Dispose(bool) never gets called more than once. This is important for other places where thread safety is very important. This class is specifically not thread safe, but it should be reasonably easy to make it so.)

And that’s it! Well, almost. I’m omitting the guts of the interop code but nothing that should inhibit comprehension of this part of it. Also, the above code is not hardened for error cases, and should not be used as-is for anything running on a server or in a shared process. Oh, and I just noticed that my Dispose() method has an incorrect implementation, whereby it shouldn’t be using this.pbArrayToGCHandle, specifically it shouldn’t be foreach-ing on it, and should instead wrap that in its own IDisposable-implementing class … exercise for the reader? Or I can post a fix later if someone wants it.

After I’ve called gpc_polygon_clip, instead of copying all the arrays using something like System.Runtime.InteropServices.Marshal.Copy(), I just use GetManagedArray() and pass in the pointer that GPC retrieved from its gpc_vertex_calloc_fn, aka AllocateArray(). When I’m done, I dispose the PinnedManagedArrayAllocator and it unpins all the managed arrays. And this is much faster than making copies of everything.
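Put together, the call site looks roughly like this (a sketch only: op, subject, clip, and pVertexArray are placeholders, and NativeMethods stands for wherever the gpc_polygon_clip P/Invoke actually lives):

    using (var allocator = new PinnedManagedArrayAllocator<Point>())
    {
        var vertexCalloc = new NativeDelegates.gpc_vertex_calloc_fn(allocator.AllocateArray);

        var result = new NativeStructs.gpc_polygon();
        NativeMethods.gpc_polygon_clip(op, ref subject, ref clip, ref result, vertexCalloc);
        GC.KeepAlive(vertexCalloc); // keep the delegate alive across the native call

        // For each vertex array pointer that GPC stored into the result, the
        // managed array comes back without any copying:
        Point[] vertices = allocator.GetManagedArray(pVertexArray);
    }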

Now, this isn’t the exact code I’m using. I’ve un-generalized it in the real code so I can allocate all of the arrays at once instead of incurring potentially hundreds of managed <-> native transitions for each allocation. The above implementation also doesn’t have a “FreeArray” method; I had one, but I ended up not needing it, so I removed it.

So the next time you find yourself calling into native code which either 1) allows you to specify an external allocator, or 2) is part of your build, and that 3) involves lots of data and thus lots of copying and wasted CPU time, you might just consider using the tactic above. Your users will thank you.

Lastly, I apologize for my blog’s poor code formatting.

Legal: I hereby place the code in this blog post into the public domain for anyone to do whatever they want with. Attribution is not required, but I certainly appreciate if you send me an e-mail or post a comment and let me know it was useful for you.

* Actually in Paint.NET 3.5, I changed it to use HeapAlloc(). This way I can get an exception raised when it runs out of memory, instead of corrupt results. This does happen on 32-bit/x86, especially when using the Magic Wand on large images.

It’s been awhile since I talked about some of the smaller features that have been implemented for Paint.NET 4.0. So, without further ado …

Light Color Scheme

Paint.NET 3.5 uses a blue color scheme. For 4.0, you can still use that, but the default is now the “Light” color scheme. The differences can be subtle, but the change is nice to have. The light theme also uses a gray canvas background (#CFCFCF, to be precise), which can be important for color matching.


Color Picker Enhancements

Ed Harvey, who wrote and has been maintaining “Ed Harvey Effects,” one of the most popular and interesting plugin packs, has contributed some more features to Paint.NET 4.0 recently. The first two are in the Color Picker and give you the ability to set the sampling size as well as whether it will sample just the current layer or the whole image:

Copy Merged

Ed Harvey is also responsible for implementing another highly requested feature, Copy Merged. When you have a selection active, Edit->Copy will take the pixels from the current layer, while Edit->Copy Merged will use the whole image. In Paint.NET v3.5 you could do this but it required you to 1) Flatten the image, 2) Copy, and finally 3) Undo the Flatten. Paint.NET 4.0 will let you do that in one keystroke, and mirrors Photoshop’s functionality and keyboard shortcut. It also means you don’t have to wipe out your Redo history.

Tool Blending Modes

Paint.NET has always had an option to let you choose between Normal and Overwrite blending. The latter is necessary if you ever want to use anything but the Eraser tool in order to “draw transparent pixels.” This has been extended to include all of the layer blend modes, and still includes Overwrite. Currently this only works on the tools which have been upgraded to the new rendering system, namely the Pencil and Gradient tools, but all the others will be upgraded in due time. (I have already started upgrading the shape tools, for instance.)
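To make the Normal vs. Overwrite distinction concrete, here’s a sketch of the two (the standard source-over math with straight, non-premultiplied alpha; an illustration, not necessarily paint.net’s exact rendering code):

    struct Rgba
    {
        public double R, G, B, A; // all components in [0, 1]
    }

    static class BlendSketch
    {
        // Normal: standard source-over compositing.
        public static Rgba Normal(Rgba src, Rgba dst)
        {
            double outA = src.A + dst.A * (1 - src.A);
            if (outA == 0)
            {
                return new Rgba(); // fully transparent
            }

            Rgba result;
            result.A = outA;
            result.R = (src.R * src.A + dst.R * dst.A * (1 - src.A)) / outA;
            result.G = (src.G * src.A + dst.G * dst.A * (1 - src.A)) / outA;
            result.B = (src.B * src.A + dst.B * dst.A * (1 - src.A)) / outA;
            return result;
        }

        // Overwrite: the destination pixel is replaced outright, alpha included,
        // which is how you can "draw transparent pixels" without the Eraser.
        public static Rgba Overwrite(Rgba src, Rgba dst)
        {
            return src;
        }
    }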

Here’s an example comparing Normal and Xor blending modes with a rounded rectangle*:

Layer Reordering with Drag-and-Drop

In Paint.NET v3.5 you have to use the cumbersome Move Layer Up and Move Layer Down buttons to change layer ordering. Paint.NET 4.0 adds what you would naturally want to do here, namely the ability to just drag-and-drop the layers to reorder them. In addition, there are some nice animations for this and all the other things that can change the contents of the Layers window.

Antialiased Selections

Whenever you have a selection active, all drawing is clipped to it. Paint.NET 4.0 can finally do this clipping with antialiasing. This results in a much smoother edge. This was actually quite simple to implement with the new rendering engine that’s in place for 4.0. (Note: Feathered selections and other gizmos are another matter entirely and will hopefully make it into a post-4.0 release without too much of a wait.)

The first option gives you the same rendering that Paint.NET v3.5 and earlier use. The second uses 2×2 super sampling on the clipping mask, and the third uses 3×3 super sampling. I experimented with 4×4 super sampling, but the improvement wasn’t very noticeable; in addition, performance went down and memory usage went up.
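For the curious, here’s roughly what 2×2 super sampling of the clipping mask means (a sketch of the general technique with my own naming, not paint.net’s implementation):

    using System;

    static class ClipMaskSketch
    {
        // Sample the selection's inside/outside test at 4 subpixel offsets and
        // average the hits, yielding fractional coverage for antialiasing.
        public static double Coverage2x2(Func<double, double, bool> isInside, int x, int y)
        {
            double[] offsets = { 0.25, 0.75 };
            int hits = 0;

            foreach (double dy in offsets)
            {
                foreach (double dx in offsets)
                {
                    if (isInside(x + dx, y + dy))
                    {
                        ++hits;
                    }
                }
            }

            return hits / 4.0; // 0, 0.25, 0.5, 0.75, or 1
        }
    }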

Here’s an example of the quality levels with a circular selection that’s had a gradient drawn inside of it:

Right now the default is Antialiased (2×2 super sampling). I’ll be doing some further experimenting, and decide whether the default should be High Quality and whether the “normal quality” option should even be present.

Anyway, that’s all for now!

* Astute readers may notice that the rounded rectangle’s corner radius does not match what 3.5 uses … yes, this will finally be configurable. Right now I’ve just got a test tool that renders a fixed size, but in short order the shape tools will get some fantastic upgrades, including configuring the corner radius for a rounded rectangle.

This is just a little note to explain why the forum may not be accessible for a little while. It’s moving to a better server, although it’ll have the same http:// location.

You shouldn’t have to do anything other than be patient for the next day or so, and then things should be back to normal once all the new DNS stuff propagates.

Apparently we were put on the wrong server during the last migration, and just recently we’ve been bumping into all of its CPU usage limitations :)
