How to fix: Paint.NET "breaks" with Vista SP2 Beta

I’ve had some reports that installing the Windows Vista SP2 beta (or “CPP”) breaks Paint.NET v3.36.

You’ll get an error message like so:

Contrary to the error, Paint.NET v3.36 does not require .NET Framework 3.5 SP1.

There are two ways to fix this:

1. Install .NET Framework 3.5 SP1. I recommend doing this anyway, because it has numerous fixes and performance improvements that make Paint.NET happy.

2. Go to the directory where you installed Paint.NET, and remove all of the files with the “.exe.config” extension. This will un-confuse the .NET loader.

This seems to be something related to the .NET Client Profile, although I’m not sure what the root cause is. I’ll be reporting this bug to the right people, so that it can be fixed.

Installing .NET 3.5 SP1: Please wait … Forever!

The very cool thing about Paint.NET v3.5 is that it installs quite fast on a fresh Windows XP SP2 machine. And that includes the installation of prerequisites like Windows Installer 3.1 and the Client Profile version of the .NET Framework 3.5 SP1. Even on my new little Atom 330 box* it is kind of pleasantly fast. I’d even say it’s fun. (The unfortunate thing is that Paint.NET v3.5 is not yet out of “pre-alpha” …)

Intel BOXD945GCLF2 Atom 330 Mini ITX Motherboard/CPU Combo

Once you jump over to Windows Vista, the story becomes very very very very dire. It took a full hour to install .NET 3.5 SP1. The hard drive was thrashing and yelling the entire time, and CPU usage was quite high. In the middle of this, a Windows Update dialog popped up in the corner telling me I needed to restart. That sounds like a bad idea since I’m still in the middle of installing a new system component! This paints a very bleak picture for getting .NET 3.5 SP1 and Paint.NET v3.5 successfully deployed to the large userbase that I have currently sitting on .NET 2.0 and Paint.NET v3.36. I’m afraid that most users will see the .NET installer “hanging” at 40% and just restart their computer, or cancel it, or kill it using Task Manager. How fun will it be for users to click on “Install Update” only to have to wait an hour before they can use their computer again, let alone Paint.NET?

I honestly don’t think it’s worth 1 hour to install a 2 MB program. Even Adobe Photoshop and Mathematica 7.0 install in minutes, and they are hundreds of megabytes.

This isn’t a random or one-off occurrence — this is not the first time I’ve seen this. Almost every time I’ve installed .NET 3.5 SP1 on to any system, whether it’s mine or someone else’s, the same thing happens. It doesn’t matter if it’s an Atom or a brand new 3.0GHz Core 2 Duo, it still takes one full hour. Sometimes you can actually get the installation to complete quickly if you go and make sure that Windows Update is completely caught up. Even then, you can never be completely sure. Any system that isn’t used 8+ hours/day by a computer-industry professional like myself is likely to be at least 1 update behind. (I’ll bet a Core i7 965 could do it in 45 minutes though :))

This is very frustrating, to say the least. On the positive side I know some of the people who work on this stuff, and they’re all great people who want things to be awesome. You can be sure I’ll be e-mailing them soon 🙂 And with any luck, the “GDR” update that’s coming (soon?) will have already fixed this. Cross your fingers.

Performance of the Atom 330 is actually surprisingly good. The results of 32-bit PdnBench are almost exactly the same as a Pentium 4 3.0 GHz “E” Prescott chip — about 180 seconds for completion — which is impressive to say the least. Back in the day (2004) that P4 chip consumed so much power that some reviewers melted their motherboards, whereas this Atom barely even needs a heatsink. In 64-bit mode, the Atom 330 pulls ahead to 155 seconds. Those results use 2 threads on the P4 (single core w/ HyperThreading), and 4 on the Atom (dual core w/ HyperThreading).

* Actually it’s not really a box. It’s small, and not inside of a case. Maybe “kit” would be a better term?

** Yes, I’m testing out some newegg.com affiliate stuff. If you’re interested in the Atom 330 board listed above, then please click on the “Buy” button above. Just like Amazon affiliate links, if you buy it via that link then I get a tiny amount of the purchase price. It doesn’t cost you anything extra. It’s another way to support Paint.NET 🙂

Goodbye Pentium 4, Hello Atom

Sadly, I fried my Pentium 4 test system a few days ago, which had proven invaluable in my performance testing of Paint.NET v3.5. I went to turn it on* and the screwdriver missed by a few millimeters, shorted the wrong pins, and … bzzzt. No more P4.

* Since this system was “bread boxed,” meaning that it wasn’t inside of a case or anything, turning it on involved shorting the two pins that the power button is normally wired directly straight to.

Fortunately I have one of these on the way from newegg. Along with twenty dollars worth of RAM (2 GB), I will soon have a new performance test bed.

It’s a motherboard with a soldered-on Intel Atom 330 CPU for $80. It’s dual-core, supports 64-bit, and has HyperThreading. And it runs in a small 8W power envelope (well, the CPU itself anyway).

Think about it: for $80 you can get started with a system that supports 4 hardware threads! I will probably disable the second core and HyperThreading, as my primary purpose is low-end, single-core performance testing. It will be interesting to see how the Atom scales with HyperThreading and the second core turned on.

My main complaint is that this motherboard only has VGA output: DVI is not an option. For what I’m using it for, this won’t matter, but it certainly prevents me from recommending it to others, especially for HTPC / Media Center systems.

Maybe in a few months I’ll be able to purchase a Dual Xeon based on the Nehalem/Core i7 architecture. 2 chips, 8 cores, 16 threads … we’ll pit it against the Atom and see who wins 😉

Paint.NET v3.5: "Improved rendering quality when zoomed in"

Brad Wolff recently wrote a comment on my earlier post, “Change of plans – here comes Paint.NET v3.5” :

“Rick – You mentioned that 3.5 will have ‘Improved rendering quality when zoomed in’. Can you elaborate on this? My fear is that we will end up having to look at the blurred mess that Windows Picture Viewer displays when zoomed in. Please tell me I am wrong!” — Brad Wolff

Brad, you’re wrong 🙂 And it’s in a good way. Paint.NET v3.5 does not use bilinear or bicubic resampling when zooming in, which is the cause of the blurred mess that you mention in Windows Picture Viewer. In fact, it is now using the same resampling algorithm for zooming in that has been employed for zooming out: rotated grid supersampling. The old resampling method was simple nearest neighbor. It was very fast, especially when paired with a lookup table for avoiding a per-pixel division operation. The image quality problem with nearest neighbor is very apparent between 101% and 199% zoom levels: you end up with a moiré of 1-pixel wide and 2-pixel wide samples and it just looks awful. With supersampling, we are able to achieve a smoothed look that does not blur as you zoom in.

Here’s an example from Paint.NET v3.36, where I’ve drawn a normal circle and some scribbles with the paintbrush tool. The zoom level was then set to 120%:

Here’s the same, but in Paint.NET v3.5:

At this zoom level, each pixel from the image should be drawn as “1.2” pixels on-screen. In v3.36, this entails drawing 4 pixels at 1-pixel width, and then a fifth pixel at 2-pixel width. Put another way, every 5th pixel is doubled in size. In v3.5, each source pixel ends up with a uniform width and the overlaps are smoothed together in a much more pleasing manner. (This is done on the y-axis as well — replace ‘width’ with ‘height’ in the preceding paragraph and it’s also true.) It will still maintain a “pixelated” appearance as you continue zooming in, which is what you want, but the edges between samples will look smoother.
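The nearest-neighbor arithmetic is easy to verify for yourself. Here's a small hypothetical sketch (not Paint.NET code) that computes how many screen pixels each source pixel occupies at a given zoom level:

```csharp
using System;

static class ZoomWidths
{
    // Under nearest-neighbor sampling, source pixel x covers screen columns
    // [floor(x * zoom), floor((x + 1) * zoom)). This returns those widths.
    public static int[] NearestNeighborWidths(double zoom, int count)
    {
        int[] widths = new int[count];
        for (int x = 0; x < count; ++x)
        {
            int left = (int)Math.Floor(x * zoom);
            int right = (int)Math.Floor((x + 1) * zoom);
            widths[x] = right - left;
        }
        return widths;
    }

    static void Main()
    {
        int[] w = NearestNeighborWidths(1.2, 10);
        Console.WriteLine(string.Join(",", w));
        // prints: 1,1,1,1,2,1,1,1,1,2 -- the moire of 1- and 2-pixel samples
    }
}
```

Every fifth entry is 2, matching the “four columns at 1-pixel width, then a fifth at 2-pixel width” pattern described above.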

This does come at a performance cost, but I believe it’s worth it. It also scales well with multiple cores, so it’s something that will be faster with each new CPU upgrade. I’ve also experimented with using bilinear and bicubic resampling — it’s fun, but too expensive and blurry. You would need an 8-core system for it to be comfortable.

A fluent approach to C# parameter validation

Fluent programming gets a bad reputation, since some developers like to write code like the following:

var time = 7.Days().Plus(4.Hours());

Barf. However, when used properly, I think it’s very powerful. Let’s look at a typical method with some parameter validation:

// Copy src[srcOffset, srcOffset + length) into dst[dstOffset, dstOffset + length)
public static void Copy<T>(T[] dst, long dstOffset, T[] src, long srcOffset, long length)
{
    if (dst == null)
        throw new ArgumentNullException("dst");

    if (src == null)
        throw new ArgumentNullException("src");

    if (dstOffset + length > dst.Length || dstOffset < 0)
        throw new ArgumentOutOfRangeException(
            "dst, dstOffset, length",
            string.Format("dst range is not within bounds, ({0} + {1}) > {2}", dstOffset, length, dst.Length));

    if (srcOffset + length > src.Length || srcOffset < 0)
        throw new ArgumentOutOfRangeException(
            "src, srcOffset, length",
            string.Format("src range is not within bounds, ({0} + {1}) > {2}", srcOffset, length, src.Length));

    if (length < 0)
        throw new ArgumentOutOfRangeException("length", "length must be >= 0, " + length.ToString());

    for (long di = dstOffset; di < dstOffset + length; ++di)
        dst[di] = src[di - dstOffset + srcOffset];
}

That’s a lot of code for parameter validation, but in a robust system it is necessary. For debugging purposes, having all the information in there with the actual parameter values is invaluable, so that you can get a stack trace that tells you, “Length was too big. It was 50, but the max was 49.”

The problem here is twofold. One, code like this gets sprinkled all over the codebase of a large application and so it gets repetitive, tiresome, and is a bug hazard. Having an off-by-1 error is many times worse if it’s in your validation code. Or, because it’s tiresome, sometimes there just won’t be any validation.

The second problem is actually much more subtle. Ask yourself this: if both src and dst are null, what exception does the caller get? (and subsequently, what goes into the crash log or Watson upload?) It will only tell you that dst is null. This leads to more iterations in debugging than is optimal, where you fix the problem of dst equaling null only to immediately get it crashing on you again when src is null. If the exception told you about both errors, you could have saved a lot of time.

This happens more often than I’d like when debugging issues on other people’s systems, especially ones I don’t have any direct access to (physical or remote, ala Remote Desktop). The end-user will post a Paint.NET crashlog to the forum, I’ll fix it and send them a patch or new build, and then the same method will crash on the very next line of code. This is especially relevant to methods for graphics which take parameters for stuff like width, height, location, bounding box, etc. The X value may be bad, but the Y value might also be bad. I need to know about both, along with the valid ranges (and not just “out of range”).

There are times where I have fixed issues with no direct interaction with a user: if I get a bunch of crash logs for a certain issue, but I can’t reproduce it, I have often been able to fix it by incorporating a hopeful and conservative fix into the next release and then monitoring to make sure that no more crash logs come in. And yes, I’ve done that many times with Paint.NET.

Reporting an aggregated judgment like this is just not fun. To go the extra mile you need to create a StringBuilder, decide on the presiding exception type, manage concatenation of multiple parameter names (“sentence-ization”), etc. Like this …

public static void Copy<T>(T[] dst, long dstOffset, T[] src, long srcOffset, long length)
{
    StringBuilder sb = new StringBuilder();
       
    if (dst == null)
        sb.Append("dst. ");

    if (src == null)
        sb.Append("src. ");

    if (sb.Length > 0)
        throw new ArgumentNullException(sb.ToString());

    if (dstOffset + length > dst.Length || dstOffset < 0)
        …

    if (srcOffset + length > src.Length || srcOffset < 0)
        …

    if (length < 0)
        …

    if (sb.Length > 0)
        throw new ArgumentOutOfRangeException(sb.ToString());

    …
}

Boo. This is still tiresome, and creates extra objects, etc. Because of the extra work involved, this tends to be done reactively instead of proactively. Only the “hot” methods get the comprehensive logic.

I’ve come up with another method. Check this out:

public static void Copy<T>(T[] dst, long dstOffset, T[] src, long srcOffset, long length)
{
    Validate.Begin()
            .IsNotNull(dst, "dst")
            .IsNotNull(src, "src")
            .Check()
            .IsPositive(length, "length")

            .IsIndexInRange(dst, dstOffset, "dstOffset")
            .IsIndexInRange(dst, dstOffset + length, "dstOffset + length")
            .IsIndexInRange(src, srcOffset, "srcOffset")
            .IsIndexInRange(src, srcOffset + length, "srcOffset + length")
            .Check();

    for (long di = dstOffset; di < dstOffset + length; ++di)
        dst[di] = src[di - dstOffset + srcOffset];
}

Yow! Ok that’s much easier to read. And here’s the kicker: if no problems are found with your parameters, then no extra objects are allocated. The cost for this pattern is only in the extra method calls.

There are three classes involved here: Validate, Validation, and ValidationExtensions. Here’s the Validate class:

public static class Validate
{
    public static Validation Begin()
    {
        return null;
    }
}

That was easy. This allows us to not allocate a “Validation” object, and its enclosed fields, until we actually encounter a problem. The presiding philosophy in code that uses exception handling is to optimize for the non-exceptional code path, and that’s exactly what we’re doing here. Here’s the Validation class:

public sealed class Validation
{
    private List<Exception> exceptions;

    public IEnumerable<Exception> Exceptions
    {
        get
        {
            return this.exceptions;
        }
    }

    public Validation AddException(Exception ex)
    {
        lock (this.exceptions)
        {
            this.exceptions.Add(ex);
        }

        return this;
    }

    public Validation()
    {
        this.exceptions = new List<Exception>(1); // optimize for only having 1 exception
    }
}

It’s basically just a list of exceptions. AddException() returns ‘this’ to make some of the code in the ValidationExtensions class easier to write. Check it out:

public static class ValidationExtensions
{
    public static Validation IsNotNull<T>(this Validation validation, T theObject, string paramName)
        where T : class
    {
        if (theObject == null)
            return (validation ?? new Validation()).AddException(new ArgumentNullException(paramName));
        else
            return validation;
    }

    public static Validation IsPositive(this Validation validation, long value, string paramName)
    {
        if (value < 0)
            return (validation ?? new Validation()).AddException(new ArgumentOutOfRangeException(paramName, "must be >= 0, but was " + value.ToString()));
        else
            return validation;
    }

    …

    public static Validation Check(this Validation validation)
    {
        if (validation == null)
            return validation;
        else
        {
            if (validation.Exceptions.Take(2).Count() == 1)
                throw new ValidationException(validation.Exceptions.First().Message, validation.Exceptions.First()); // ValidationException is just a standard Exception-derived class with the usual four constructors
            else
                throw new ValidationException("Multiple validation failures", new MultiException(validation.Exceptions)); // implementation shown below
        }
    }
}

The combination of these classes allows us to write validation code in a very clean and readable format. It reduces friction for having proper validation in more (or all? :)) methods, and reduces the bug hazard of either incorrect or omitted validation code.
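The trick that makes the success path allocation-free is that C# extension methods compile to ordinary static calls, so they can be invoked on a null receiver. A tiny standalone demo (the names here are hypothetical):

```csharp
using System;

static class NullReceiverDemo
{
    // An extension method on object. 'obj' may legally be null, because the
    // call below compiles to NullReceiverDemo.IsNull(o) -- a static call,
    // not a virtual instance dispatch.
    public static bool IsNull(this object obj)
    {
        return obj == null;
    }

    static void Main()
    {
        object o = null;
        // No NullReferenceException here, unlike a real instance method call.
        Console.WriteLine(o.IsNull()); // prints: True
    }
}
```

This is exactly why Check() can be called on the null that Validate.Begin() returns: every IsNotNull/IsPositive/Check call still dispatches, and a Validation object only gets allocated once a failure is actually seen.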

Missing from this implementation, and other kinks to work out:

  • Could use lots of additional methods within ValidationExtensions. (some were omitted for brevity in this blog post)
  • Calling ValidationExtensions.Check() is itself not validated. So, if you forget to put a call to it at the end of your validation expression then the exception will not be thrown. Often you’ll end up plowing into a null reference and getting a NullReferenceException, especially if you were relying on ValidationExtensions.IsNotNull(), but this isn’t guaranteed for the other validations (esp. when dealing with unmanaged data types). It would be simple to add code to Validation to ensure that its list of exceptions was “observed”, and if not then in the finalizer it could yell and scream with an exception.
  • The exception type coming out of any method that uses this will be ValidationException. This isn’t an issue for crash logs, but it is for when you call a method and want to discriminate among multiple exception types and decide what to do next (e.g., FileNotFoundException vs. AccessDeniedException). I’m sure there’s a way to fix that, with better aggregation, and (hopefully) without reflection.
  • Should probably change the IEnumerable<Exception> in Validation to be Exception[].
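The “observed” idea from the second bullet could look something like this; a hypothetical sketch of how the validation object might detect a forgotten Check(). (A finalizer can't safely throw, so it complains via Debug.Fail instead of yelling with an exception.)

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public sealed class ObservableValidation
{
    private readonly List<Exception> exceptions = new List<Exception>(1);
    private bool observed;

    public void AddException(Exception ex)
    {
        this.exceptions.Add(ex);
    }

    // Check() would call this; it marks the exception list as "observed."
    public IList<Exception> TakeExceptions()
    {
        this.observed = true;
        GC.SuppressFinalize(this); // nothing left for the finalizer to verify
        return this.exceptions;
    }

    ~ObservableValidation()
    {
        // If we got here with unobserved exceptions, someone forgot Check().
        if (!this.observed && this.exceptions.Count > 0)
            Debug.Fail("Validation exceptions were recorded but never Check()ed.");
    }
}
```

Debug.Fail only fires in debug builds, which is arguably what you want: the forgotten-Check() bug gets caught during development without adding finalization cost to release builds on the success path.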

Here’s the implementation of MultiException, as promised in the code above. And, in fact, it’s incomplete because it does not print all of the exceptions in a ToString() type of call. Umm … how about I leave that as an exercise for the reader? 🙂

[Serializable]
public sealed class MultiException
    : Exception
{
    private Exception[] innerExceptions;

    public IEnumerable<Exception> InnerExceptions
    {
        get
        {
            if (this.innerExceptions != null)
            {
                for (int i = 0; i < this.innerExceptions.Length; ++i)
                {
                    yield return this.innerExceptions[i];
                }
            }
        }
    }

    public MultiException()
        : base()
    {
    }

    public MultiException(string message)
        : base(message)
    {
    }

    public MultiException(string message, Exception innerException)
        : base(message, innerException)
    {
        this.innerExceptions = new Exception[1] { innerException };
    }

    public MultiException(IEnumerable<Exception> innerExceptions)
        : this(null, innerExceptions)
    {
    }

    public MultiException(Exception[] innerExceptions)
        : this(null, (IEnumerable<Exception>)innerExceptions)
    {
    }

    public MultiException(string message, Exception[] innerExceptions)
        : this(message, (IEnumerable<Exception>)innerExceptions)
    {
    }

    public MultiException(string message, IEnumerable<Exception> innerExceptions)
        : base(message, innerExceptions.FirstOrDefault())
    {
        // AnyNull() is a small custom extension method: true if any element is null
        if (innerExceptions.AnyNull())
        {
            throw new ArgumentNullException();
        }

        this.innerExceptions = innerExceptions.ToArray();
    }

    private MultiException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
    }
}

What if XP SP3 were the minimum OS?

Currently, the minimum version of Windows that Paint.NET will run on is XP SP2. Unfortunately, it’s starting to show its age and it’s creating a big hassle for the installer. The issue is that a “fresh” installation of XP SP2 does not have Windows Installer 3.1, whereas XP SP3 does. I have all sorts of custom code to detect this, and special packaging rules for creating my ZIP files and self-extractors. It adds about 2MB to the Paint.NET v3.5 download, although it greatly improves the user experience and reduces friction for getting our favorite freeware installed. I was hoping to get the .NET 3.5 Client Profile installer to auto-download Windows Installer 3.1, but unfortunately it has a hard block on this before it even starts to parse the Products.XML file which contains the installation manifest and logic.
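For the curious, Windows Installer detection generally boils down to reading the file version of msi.dll. This is an illustrative sketch of that approach, not Paint.NET's actual installer code:

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class MsiVersion
{
    // Windows Installer's version is the file version of %WINDIR%\system32\msi.dll.
    // XP SP3 ships 3.1+, while a fresh XP SP2 install has an older version.
    public static bool IsWindowsInstaller31OrLater()
    {
        string msiPath = Path.Combine(Environment.SystemDirectory, "msi.dll");
        if (!File.Exists(msiPath))
            return false;

        FileVersionInfo fvi = FileVersionInfo.GetVersionInfo(msiPath);
        return fvi.FileMajorPart > 3 ||
               (fvi.FileMajorPart == 3 && fvi.FileMinorPart >= 1);
    }
}
```

An installer would run a check like this before bootstrapping, and only download the Windows Installer 3.1 redistributable when it returns false.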

If I were to set the minimum system requirement to be XP SP3, then it would greatly simplify things!

There’s no charge to upgrade from XP SP2 to XP SP3. So, why isn’t everyone using it yet? I have a thread over on the forum where I’m asking any XP SP2 users to reply and tell me why they haven’t upgraded to XP SP3 yet. So far the reasons are: dial-up, too busy, and “didn’t see a reason to.” (actually that last one came to me via a private message, so you won’t see it on the forum)

I’d like to extend the discussion to this blog: if you haven’t upgraded from XP SP2 to XP SP3, please post a comment and let me know why. I’m not trying to make judgments here, so please don’t be shy — I’m simply on a fact-finding mission. The sooner I can bump up the minimum requirement to XP SP3, the better things will be: the download size will go down, I can spend more time on other engineering tasks and less time testing, and I can drink more beer. All of these make someone happier.

This also brings to light the issue of prerequisite management on Windows, and for freeware apps. First, why isn’t it easier to deal with prerequisite OS components? Second, in the eyes of a typical user, what leverage or authority does a 1.5MB freeware (Paint.NET) have in dictating what service pack level you should have installed? If Photoshop were to require SP3, you can bet that a user who just paid $650 is going to install it so that they can get their money’s worth! And it probably isn’t a good idea (or feasible!) for Paint.NET to auto-download and install an entire service pack. Which means that the user experience involves the trusty message box that says, “You don’t have ___insert stupid computer nerd babble here___. Click Yes to do something even more confusing, or No to go back to what you were doing before.”

An exploit requiring admin privilege is NOT an exploit

I’m going to pick on a post that I saw on the forum recently, “Root kits for .NET framework been found” [sic]. Now, I believe this person was just doing due diligence and reporting something they thought might honestly be important. So, “sharpy” (if that is your real name!), this is not meant as a dig on you. The post points to another forum discussion at dslreports.com, which then has some other links I’ll let you explore yourself.

In short, the author of some paper or exploit is claiming they have hacked the .NET Framework such that they can bypass strong-name validation, or replace code in mscorlib.dll, etc. I’ll publish the first line of the first reply to the post on dslreports:

“The ‘exploit’ starts with the modification of a framework dll (assembly) from outside the runtime using administrative privileges.”

Spot the refutation? I put it in bold 🙂 It’s like something Raymond Chen has blogged about on at least one occasion:

“Surprisingly, it is not a security vulnerability that administrators can add other users to the Administrators group.”

Here’s a pop quiz. If you have administrator access to someone else’s machine, which of the following would you do?

  1. Format the hard drive.
  2. Steal data, then format the hard drive.
  3. Display a dialog box saying, “Gotcha!”, and then format the hard drive.
  4. Decompile mscorlib.dll, inject extra code into the IL for the Assembly.Load() method, recompile the new IL into a new mscorlib.dll, replace the existing mscorlib.dll with your hacked version, edit the system configuration to bypass verification, remove the optimized “NGEN” version of mscorlib.dll, delete the pertinent log entries to cover your tracks, and then wait an undetermined amount of time to see that someone launching Paint.NET or their NVIDIA Control Panel gets a formatted hard drive instead.

 
“When the looting begins remember to consider the weight/value ratio. Here we have a few examples of high value, low effort.” http://www.safenow.org

I don’t know about you, but I’d probably just go with #1 or #3. I have all the data I need already, thankyouverymuch. No need to take a graduate course in compilers in order to do the job via #4.

Everything being done in #4 is possible for someone with administrator privilege. They’re only doing what they already have access to do. However, if a non-administrator can do this, then it’s an elevation of privilege issue. If it’s trivial to trick or mislead an administrator into doing it, then it could be called an “admin attack”. But all this is a discussion for another time.

So in conclusion, I wouldn’t be worried about this. The moment you see something about an attack or exploit requiring administrator privilege, or in some cases even just physical access, feel free to relax. (After all, if you have physical access to the computer, just hit the reset button and install Linux, right?)

* Disclaimer … Note that this is a slightly cynical post, and it’s by no means comprehensive.

Change of plans – here comes Paint.NET v3.5

The features that I want to implement for Paint.NET v4 are easily going to take another 6+ months to finish. However, I really want to get the improvements I’ve already made into the hands of users (that’s you!). I’d also like to get everyone updated to a newer version of .NET (right now Paint.NET v3.36 only requires .NET 2.0). If I wait another “6+” months, then it will be almost time for .NET 4.0 and I don’t want to deal with two big .NET upgrades in the same short period of time — or worse, face the indecision of “release now or in another 6 months after the new .NET is out…”.

After some discussion and debate with some forum members and moderators, I decided that I would go ahead and release the work I’ve done so far on Paint.NET v4 as Paint.NET v3.5. This would entail wrapping up all the current loose ends (fixing “new” bugs), finishing the last few work items, getting translation done, and releasing a few betas.

So here’s what to expect for Paint.NET v3.5:

  • Now uses, and requires, .NET Framework 3.5 SP1 (This also means that plugins can use .NET 3.5 SP1 features!)
  • New effect: “Surface Blur”, by Ed Harvey. It’s another good tool for noise reduction.
  • New effect: “Dents”, by Ed Harvey.
  • New effect: “Crystalize”, by Ed Harvey.
  • New file type support: HD Photo (or whatever the latest name for it is)
  • The auto-updater now lets you choose to have an update downloaded in the background and then installed once you exit Paint.NET. (A lot of people are going to like this feature!)
  • Moved “Language” and “Check for Updates” to the new Utilities menu
  • Reduced memory usage, especially when multiple images are open.
  • Improved rendering quality when zoomed in.
  • Greatly improved performance when opening and closing images.
  • Improved the installer UI by removing the “popup” progress windows.
  • “Optimizing performance” section of installer now gives actual progress instead of using the ambiguous “marquee” mode.
  • Installation is much simpler if the .NET Framework isn’t installed yet, or if it needs to be updated.
  • A CPU with SSE support is now required, such as an Intel Pentium III, or AMD Athlon XP, or newer.
  • Many miscellaneous bug fixes, as usual.

This is actually a fairly significant update to Paint.NET, although most of the changes are “under the hood.” Getting this released sooner will help make sure that when Paint.NET v4 does roll around that more of the new technology has been shaken free of bugs. The system requirements will be the same as what I posted last week for Paint.NET v4.

Paint.NET version 4.0 system requirements

The system requirements for Paint.NET version 4.0 will be increased slightly, although it shouldn’t affect many people.

Here is what version 3.36 requires:

  • Windows XP (SP2 or later), or Windows Vista, or Windows Server (2003 SP1 or later)
  • .NET Framework 2.0 (recommended: .NET Framework 3.5 SP1)
  • 500 MHz processor (recommended: 800 MHz or faster)
  • 256 MB of RAM (recommended: 512 MB or more)
  • 1024 x 768 screen resolution
  • 200+ MB hard drive space
  • 64-bit support requires a 64-bit CPU that is running a 64-bit version of Windows, and an additional 128 MB of RAM

And here’s what I’m planning for version 4.0:

  • Windows XP (SP2 or later), or Windows Vista, or Windows Server (2003 SP1 or later)
  • .NET Framework 3.5 SP1
  • Intel Pentium III, or AMD Athlon XP, or any newer CPU with SSE support (recommended: any dual-core CPU)
  • 256MB of RAM in Windows XP (recommended: 512MB or more)
  • 768MB of RAM in Windows Vista (recommended: 1GB or more)
  • 1024 x 768 screen resolution (recommended: 1280×1024 or larger)
  • 200+ MB hard drive space
  • 64-bit mode requires an additional 256MB of RAM, a 64-bit CPU, and a 64-bit edition of Windows

The biggest changes are the .NET 3.5 SP1 and SSE requirements. Requiring SSE simplifies a few things with the native code, and makes things a lot faster as well (especially for DDS file saving). Since the Pentium III is 9 years old, and the Athlon XP is 7 years old, I figured it was safe to do this. All 64-bit processors support SSE2, so SSE2 is used when running in 64-bit mode. It’s rather interesting to have the C++ compiler output the .asm files for GPC and to see how much SSE2 is part of the instruction mix (quite a lot!).

I’m not requiring any newer service pack levels, such as XP SP3 or Vista SP1. I don’t really see any need to. This probably won’t change until .NET itself requires something newer.

I’m not finding that I need to increase the memory requirement at all. In fact, technically the amount of required memory may go down with the changes I’m making to the rendering system. Less memory is always a good thing 🙂

So, let me know if you think any of this will be a problem for your deployment or installation. Also, bear in mind that the only “hard” requirements are XP SP2, .NET 3.5 SP1, and SSE support. By “hard” I mean they are the only ones I actually enforce in the installer and at application startup.
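Enforcing the SSE requirement at startup only takes one Win32 call. Here's a hypothetical sketch of such a check (not the actual Paint.NET code):

```csharp
using System;
using System.Runtime.InteropServices;

static class CpuCheck
{
    // PF_XMMI_INSTRUCTIONS_AVAILABLE (6) reports SSE support;
    // PF_XMMI64_INSTRUCTIONS_AVAILABLE (10) would report SSE2.
    private const uint PF_XMMI_INSTRUCTIONS_AVAILABLE = 6;

    [DllImport("kernel32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool IsProcessorFeaturePresent(uint feature);

    public static bool HasSse()
    {
        return IsProcessorFeaturePresent(PF_XMMI_INSTRUCTIONS_AVAILABLE);
    }
}
```

An installer or application entry point could call CpuCheck.HasSse() once and bail out with a friendly message if it returns false, instead of crashing later with an illegal-instruction fault.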

Paint.NET and Performance — Thumbnails

One thing I’ve always had fun with in client development is performance. Paint.NET is quite heavily optimized for a short startup time, as well as for multicore for various rendering kernels (and for the effect system in general). So when I see a comment like this over on BetaNews …

Reviewer: Galifray
It’s a good, strong program, but it has some flaws that have not been fixed. On slower systems, like mine, it can take ten or more seconds to open images, regardless of the image’s size. The program simply hangs with a busy cursor until it’s finally ready. This is a real annoyance when I’m attempting to open several images at once, or I’ve pasted content into a new image.

… it saddens me a little, and I immediately want to fix it 🙂 I personally hate it whenever a program has a busy cursor for no reason that I can discern. In fact, I know exactly what’s causing this. When Paint.NET loads an image, it immediately generates two thumbnails. The first goes into the File->Open Recent menu (I still don’t know why no other imaging application does this! CS3 just dumps a list of files! update: apparently GIMP does this!). The second goes into the image thumbnail list in the top-right of the window, and is something that is continually updated as you work on or change the image.

The problem is that Paint.NET is waiting for both of these thumbnails to be created before letting go of the “busy” cursor and allowing you to actually do anything. It isn’t something you’ll really notice on a dual- or quad-core system, and since I haven’t had a single-core system in 5 years I’ve never put much thought into it. Plus, the desktop market is increasingly not single-core (even $50 Celerons are dual-core now), so part of me has dismissed it as a problem akin to dial-up versus broad-band: over time, it just fixes itself.

I decided to fix this anyway, as it would make the application faster and more responsive on all systems, as well as allow me to brush up on some asynchronous programming in a relatively safe area of the code. In order to fix the problem, I first needed to recreate and empathize with the situation. My development box has four cores* and 8 GB of RAM, which basically means I’m on a totally different planet than most users. I scrounged up some old computer parts and, voilΓ , I now have a good low-end bread-boarded system for performance testing:

It’s a Pentium 4 at 2.26 GHz with 2 GB of DDR400 memory running in single-channel DDR266 mode. The board is an Intel 865 “PERL” and supports dual-channel DDR400, but I specifically wanted the lower performance of single-channel mode. I also have a 2.8 GHz Pentium 4 chip with HyperThreading so I can test how that extra hardware thread affects things. (For this scenario, it actually makes a huge difference!) I installed Windows XP SP2, and then Paint.NET v3.36, and opened up a bunch of images: sure enough, the performance sucked. I now have the empathy I’d been seeking.

Anyway, back to the problem at hand. There are at least two more places where thumbnail synchronization happens in Paint.NET v3.xx. The first is when you open the “image list menu” (press Ctrl+Q): it will wait until all thumbnails are up to date before showing you the menu.

The second is when you close a single image that has unsaved changes. If the thumbnail isn’t up to date, this dialog will not show until it is.

The wait to bring these up can be significant in certain pathological scenarios involving very large images, slower systems, or priority inversion (the thumbnail generator thread runs at low priority). Contrast this with the multiple-image version of the Unsaved Changes dialog, which doesn’t wait for thumbnails and instead adds them to the dialog as they finish rendering. That dialog comes up immediately even if you have 100 images open that all have unsaved changes.

I want and need Paint.NET v4 to be responsive at all times. The response time of the application should not be [linearly] proportional to the size of the data being manipulated, or to the latency of retrieving/computing that data. The example I always give to help explain this is Google Maps / Live Search Maps. When you scroll around, it doesn’t stall while it downloads any specific tile. Instead, it shows a generic placeholder tile that effectively signals to the user that the real one is still being downloaded. Client applications should strive for this level of responsiveness. Accomplishing it requires a lot of asynchronous programming, and this was a good place to get started.

So, in the current Paint.NET v4 codebase, I’ve made the following changes:

1) The Open Recent thumbnail is now generated on a background thread. Since it’s possible to open the menu before the thumbnail is done, it currently just shows a “blank” thumbnail if you’re that quick. This has created another race condition that I plan to deal with later: if you’re quick, you can scribble on the image you just loaded, and that scribbling will then show up in the Open Recent menu. Whoops πŸ™‚ This will be fixable once I make changes to the read/write model employed by the data layer.

2) For the image thumbnail list in the top right, I’m removing all code that “synchronizes” on it. There is a little “busy” animation for when the thumbnail is being generated for the first time (thank you Tango).

3) For the image list menu (Ctrl+Q), if a thumbnail isn’t available yet then it will just show a blank thumbnail.

4) The unsaved-changes dialog uses the same trick as the image thumbnail list: a “busy” animation while the thumbnail is being generated. It will never block on this, so if you are faster than the thumbnail rendering then you will save a few precious seconds. This required adding support to my Task Dialog so that it could display an animation and not just a static image.

The Layers window does not yet have the animation for a thumbnail that isn’t ready yet. I’m planning to do significant work in that area later, so I’ve saved this work for then. And I definitely need to write some classes so that I can support animations in the UI a lot better; right now each occurrence manually sets up its own timers, etc.

The result? In Paint.NET v3.36 on that 2.26 GHz Pentium 4, it takes 28 seconds to load a batch of eight 7 megapixel images. There are also periods of time where it appears to “stall” for no good reason. Paint.NET v4 does it in 16 seconds, and has none of the obnoxious stalls. You just end up with several thumbnails in the top-right window that show up as “busy” animations until the thumbnails have finished rendering, during which time you’re free to do whatever you want. As a bonus, performance is also noticeably faster on my quad-core development box.

My ad-hoc Pentium 4 box is proving to be quite useful for performance testing. Over the last few days, Paint.NET v4 has significantly improved in performance. I’ve so far rewritten the front-end of the rendering engine, and had to come up with some interesting tricks to keep the performance good. The first revisions of this code were plenty functional, but very naive from a performance standpoint.

* Until Nehalem comes out, that is. Then it’s time for a dual Xeon box. Sixteen threads!