
Photoshop Plug-Ins Part 2

Volume Number: 15 (1999)
Issue Number: 5
Column Tag: Programming Techniques

Writing a Photoshop Plug-In, Part 2

by Kas Thomas

Learn to harness the awesome power of Adobe's 800-lb graphics gorilla

Introduction

Plug-ins are among the best things ever to have happened to desktop graphics. They're small, powerful, versatile additions to the graphic artist's arsenal; and for programmers, they allow easy entry into the complex world of image processing, relieving the developer from having to worry about file I/O, event loops, memory management, blit routines, color modes and gamuts, etc. As a result, more time can be spent on image-processing issues and less time need be devoted to application infrastructure issues. It's much easier to test a new idea by implementing it as a plug-in than by trying to splice new (and possibly buggy) code into a standalone application.

Last month, in Part 1 of this article, we laid the groundwork for writing plug-ins for Adobe Photoshop, the bestselling graphics powerhouse. We talked about the Photoshop plug-in API, the nine types of plug-ins Photoshop currently supports, 'PiPL' resources, how the plug-in communicates with the host, best ways to "chunk" an image, and how image data is organized. We also talked about error handling, callback services, memory issues, and debugging tips. But we still only had time to hit the high points, without discussing much actual code.

In this installment, we'll actually run through some real, live plug-in code (finally!), showing how to drive the host through prefetch buffering, how to process large images in chunks (padded on the edges if need be), and how to get realtime previews to appear in your user dialog. Along the way, we'll say a word or two about area operators, gamma and bias, histogram flattening, and sundry other graphics-programming concepts. We've got a huge amount to cover in a limited space, so let's get right to it.

Rapid-Fire Review

In case you weren't with us last month (or maybe you were, but you've since forgotten everything), here's a capsule review; see if it makes sense.

Image-filter plug-ins for Photoshop are compiled as shared library modules (filetype '8BFM', creator '8BIM'), with an entry point of main(), which is typed as pascal void and must be the first function in the module. Since we're compiling PPC-native (for Photoshop 5.0), we can use globals and static data with impunity. The resource fork of the plug-in should contain a 'PiPL' resource (which ResEdit isn't much help with, incidentally) conforming to Adobe's SDK guidelines <http://www.adobe.com>.

At runtime, the host (which we'll assume is Photoshop, although it could very well be another program, such as After Effects) calls the plug-in's entry point with a selector value to indicate the phase of operation that the plug-in is in. There are six possible selector values, corresponding to About (time to pop the About dialog), Parameters (allocate globals), Prepare (pop a user dialog if need be; otherwise validate your global values), Start (initiate processing), Continue (continue processing), and Finish (cleanup). In the example plug-in for this article, we have an empty Finish handler, since there are no cleanups to do, and the Continue handler (while non-empty) isn't really needed. If the host has a callback named AdvanceState(), we don't need a Continue phase, because we can drive the buffer prefetch process all the way to completion in Start. Every version of Photoshop since 3.0 has AdvanceState(). But some host programs (After Effects 3.1, for example) do not support this function. For those programs, you need a Continue-based polling loop in order to retrieve data.

The host communicates with the plug-in by means of a gigantic (200-field) data structure called the FilterRecord, a pointer to which is included in each call to main(). Some of the fields of this structure contain pointers to host functions (callbacks), or suites of functions. The structure is too huge to list here (although key fields were listed in last month's article). We will discuss important fields, and their use, as they come up.

The plug-in communicates with the host in two ways: via callback functions, and by setting parameter values (in the FilterRecord) during the Start phase. The field values set by the plug-in at Start will be inspected by the host prior to data buffering operations. This is how the host "knows" what size buffers to allocate for input and output, which channels of data to fetch for the plug-in, etc.

Two callbacks that are worth noting are the TestAbortProc() and UpdateProgressProc(). The former looks for user cancellation events and spins the watch cursor dial. The latter displays a progress bar to the user, automatically suppressing it for short operations. These should be called frequently during plug-in execution (i.e., in the main loop, not the user dialog).

The plug-in needn't have any event-loop code (unless you want to write a dialog filter) and in most cases you won't have any need for Quickdraw, since everything you need to know is available either from the FilterRecord structure or via host callbacks. Undo actions are done for you by the host. You don't have to worry about allocating input and output buffers; the host will point you to them. You also don't have to do masking, because the host will automatically mask your operation to the user's lasso area.

Image Filter Example

In articles of this sort, example code tends to be rather trivial and cursory, forgoing sophistication in favor of clarity, which of course is not only boring but unhelpful. So for this article, rather than follow tradition I tried to produce an example plug-in that might actually be useful. The filter that resulted implements a subtle edge-detection method which, combined with histogram flattening and a few other tricks, gives some visually interesting results (see Figure 1). As a tribute to the pharmacologically active beverages used in the development of this plug-in, the plug-in was named Latté.


Figure 1. Lena before Latté (left) and after. The version on the right was created by Latté in Sketch mode with a pixel radius of 2.75 and histogram equalization enabled.

The CodeWarrior project for Latté, available online at <ftp://www.mactech.com>, is composed of four code modules (written in Metrowerks C): two primary files containing the handler code and user interface, and two utility files. Latté.c and LattéUserDialog.c comprise about 1,500 lines of code (total). The two utility modules comprise another 5,500 lines of rather pedestrian code involving string conversions and such. The real action is in Latté.c, where our handlers and image-modification code reside. Listing 1 shows the main() function. Let's quickly go over it.

Listing 1: main()

main()
This is the entrypoint for our plug-in, in Latté.c.

pascal void main (const short selector,
                    FilterRecord *filterParamBlock,
                    long *data,
                    short *result)
{
   // Declare a handler dispatch table
   // (an array of function pointers)
   
   static void (* const handlerFunc[]) (GPtr) =
   {
         NULL,         // filterSelectorAbout (handled in main)
         DoParameters, // filterSelectorParameters 
         DoPrepare,    // filterSelectorPrepare
         DoStart,      // filterSelectorStart
         DoContinue,   // filterSelectorContinue
         DoFinish      // filterSelectorFinish
   };
   
   GPtr globals = NULL;       // actual globals

   //   Check for about box request.
   if (selector == filterSelectorAbout)
   {
      DoAbout();
      *result = noErr;
      return;
   }
      
   // Get globals the old-fashioned way.
   globals = AllocateOurGlobals (result,
                         filterParamBlock,
                         filterParamBlock->handleProcs,
                         sizeof(Globals),
                          data,
                          InitGlobals);
   
   if (globals == NULL)
   {    
       // Fortunately, everything's already been cleaned up,
       // so all we have to do is report an error.
    
    *result = memFullErr;
    return;
   }
   
   //------------------------------------
   //   Dispatch to the appropriate handler function.
   //------------------------------------
   if (selector > filterSelectorAbout && 
      selector <= filterSelectorFinish)
         (handlerFunc[selector])(globals); // dispatch via jump table
   else
      gResult = kFilterBadParameters;

   // unlock handle pointing to parameter block and data
   // so it can move if memory gets shuffled. (Not needed
   // if you declare globals normally.)
   if ((Handle)*data != NULL)
      PIUnlockHandle((Handle)*data);
   
} // end main

The first thing to note is that we declare a function table to hold pointers to our handlers; this avoids turning our main() function into one big, long, ugly "case" switch.

We check for the About selector right away and handle it as a trivial case, rather than dispatching it out, because when the host calls us with the About selector, the filterParamBlock is not valid and there's no sense going through the rest of main().

Before dispatching to a handler, we should set up our globals, because each handler expects a pointer to the globals. The AllocateOurGlobals() function (complete code online) allocates globals if they haven't yet been allocated. The pointer returned by that function points at a custom record structure that looks like:

typedef struct Globals
{    
   short   *result;            // This is reported to host.
   FilterRecord   *filterParamBlock;   // FilterRecord
   Rect      proxyRect;         // used in our user interface code

} Globals, *GPtr, **GHdl;

Each handler stuffs its return code into the result field of this record, which points to the address passed in the last argument to main(). If a negative number is stuffed, the host will display an appropriate error message for us. If a positive error code is passed, the host does nothing, because it expects us to pop an alert. A normal result is noErr.

The AllocateOurGlobals() function sets up a pointer to our globals the old-fashioned way (as it had to be done in the pre-PowerMac era) by stuffing a Handle value in data. This was the mechanism Adobe came up with for globals back in the bad old days when the MC680x0's A5 register was the key to globals. If you compile PPC-native, there is no longer any need to set up globals the old way; you can just declare them normally. If you do it the old way, the data Handle is locked inside AllocateOurGlobals() and has to be unlocked at the end of main(). We call one of our own utilities, PIUnlockHandle(), in this instance because it unravels the lengthy chain of indirections needed to get at the host's memory routines. If we didn't do this, the last line of code in main() would look like:

(*((globals->filterParamBlock)-> \
   handleProcs)->unlockProc)((Handle)*data);

which even Kernighan and Ritchie would find repellent. Even so, such an expression repays close study, because it shows how the unlockProc (in Photoshop's Handle Suite of callbacks) is accessed. Throughout our code, we use functions like HostLockHandle() that eventually call on the host's Handle Suite, a set of callbacks designed to let the host implement memory routines in a platform-appropriate manner. We avoid using MacOS calls, not only for device independence but because Photoshop implements a much more efficient memory management scheme, internally, than the MacOS does.
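
In case you're curious, here is roughly what such a wrapper looks like. This is only a sketch of the pattern (the utility files in the Latté project and in Adobe's SDK differ in their details), but it shows the idea: a thin macro hands the FilterRecord's handle suite to a helper, which does the null-checking and the actual callback.

#define PIUnlockHandle(h)   HostUnlockHandle (gStuff->handleProcs, (h))

void HostUnlockHandle (HandleProcs *procs, Handle h)
{
   // Call through the host's Handle Suite, guarding against a host
   // that didn't supply the suite (or a NULL handle).
   if (procs != NULL && procs->unlockProc != NULL && h != NULL)
      (*procs->unlockProc) (h);
}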

So far, we've allocated globals but we haven't initialized them. Initialization is done in the Parameter phase of operation. When main() is called with a selector of filterSelectorParameters, we vector to our DoParameters() handler function. Listing 2 shows the handler as well as the ValidateParameters() function. The essential thing to remember is that if this handler is called, it means the plug-in has just been invoked for the first time. That means the user needs to see a dialog. But first, the dialog's default parameters have to be set. These constitute additional globals, which are attached to the parameters field of the FilterRecord. (Remember, a pointer to this enormous structure was given to us as a parameter to main.) For Latté, we define a user-prefs struct, the TParameters block, as follows:

// This is our user-dialog params record. Persistent across plug-in calls.
typedef struct TParameters
{ 
   double    userRadius;
   Boolean    queryForParameters;
   short       rowSkip;
   long       userMode;
   Boolean    useEqualizedHistogram;
   Boolean    useAdvance;
   double    histogram[256];
   long       imagesize;
   double    blendFactor;
   
} TParameters, *PParameters, **HParameters;

Ordinarily, to get at the fields of our TParameters record we would need to do a lot of indirection, such as:

((TParameters *)*(globals->filterParamBlock)-> \
      parameters)->userRadius = 4.0;

which is hard to read and easy to mess up. So to simplify access to these fields, we rely on certain macros and definitions (given in Latté.h) - see Listing 3 - which let us just say gRadius = 4.0.

The macros in Listing 3 are worth scrutinizing since they tend to be used liberally throughout our plug-in code (as well as Adobe's own example code). Without them, life would be a lot harder - maybe not worth living. (All right, that might be overstating it. But you get my point.)

Listing 2: Parameter Handler

DoParameters()

This is our Parameter handler function. If this function is called, it means the plug-in has been invoked for the first time.

void DoParameters (GPtr globals)
{
   ValidateParameters (globals);
   
   gQueryForParameters = TRUE; // meaning, we need to pop the dialog
}

ValidateParameters()

Here is where the parameters for the user dialog get initialized. The relevant data structure gets attached to the parameters field of the FilterRecord, which was passed in the second argument to main().

void ValidateParameters (GPtr globals)
{
   if (gStuff->parameters == NULL)
   {
      // attach to FilterRecord's parameters field:
      gStuff->parameters = 
         PINewHandle ((long) sizeof (TParameters));
      
      if (gStuff->parameters == NULL)
      { 
         gResult = memFullErr;
         return;
      }

      gRadius = kDRadiusDefault;
      gOperatingMode = dialogOperatingModeSketch;
      gUseEqualizedHistogram = false;
      gUseAdvance = false;
      gRowSkip = 1;
      gBlendFactor = kDBlendDefault;
      gResult = noErr;
      return;

   } // parameters
}

Listing 3: Indirection Macros

Indirection macros

// there are only 3 fields in our Globals struct, so:
#define gResult          (*(globals->result))
#define gStuff          (globals->filterParamBlock)
#define gProxyRect      (globals->proxyRect)

// the following reflect TParameters fields:
#define gParams                ((PParameters) *gStuff->parameters)
#define gRadius                  (gParams->userRadius)
#define gQueryForParameters   (gParams->queryForParameters)
#define gOperatingMode       (gParams->userMode)
#define gUseAdvance            (gParams->useAdvance)
#define gRowSkip               (gParams->rowSkip)
#define gBlendFactor         (gParams->blendFactor)
#define gUseEqualizedHistogram    \
                                    (gParams->useEqualizedHistogram)

Prepare Handler

Our DoParameters() function in Listing 2 sets a flag to tell us to pop the user dialog in the Start phase. But before we get there, the host will first call us with a Prepare message, at which point we vector to our Prepare handler, Listing 4.

When our Prepare handler is called, we can be confident that the FilterRecord's fields have been filled out by the host, which means we can easily find out almost everything we might want to know about the image in terms of resolution, total size, dimensions, number of available planes (channels), image mode (grayscale, RGB, or whatever), and so on. One of the most important fields in the FilterRecord is the filterRect, which is a Rect giving the raw bounds of the image (or selection, as the case may be), in pixels. (If it's a selection, filterRect.left won't necessarily be zero.) Our job in the Prepare phase is to figure out how much data we can safely process at a time. We can do this by taking filterRect.right - filterRect.left (the number of pixels in one raster line), times gStuff->planes (the number of 8-bit bytes per pixel, which might be considerable since Photoshop allows up to 24 channels per image), doubled because we'll need both an input buffer and an output buffer, plus one extra plane's worth of bytes for the mask; then we find out how many times that number will go into gStuff->maxSpace, which is the maximum amount of space available. (It's one of those FilterRecord fields that Photoshop fills out for us.) We keep the result in a global, gRowSkip, which represents the number of raster lines of data we can ask for at once without causing the host to go into scratch-disk double buffering.

The constants XMARGIN and YMARGIN in Listing 4 are used so that we get buffers with "overhang" on all edges. Photoshop will courteously pad the margin areas with any pixel value we want, or - better yet - with edge replication. (See Listing 5, below.) We have set XMARGIN and YMARGIN to 4 in Latté.h so that we can safely use a 9x9 convolution matrix on each pixel, thus obviating the need for ugly special-case code for edge conditions.

We actually request all available planes in our code, even though Latté processes just the color channels. If you really want to economize on the use of buffer memory, or if you don't like dealing with interleaved data, you can request one plane of data at a time. The colors start with plane zero and end with plane three for CMYK or plane two for RGB, etc.; then you get into layer masks and alpha channels. (See Adobe's SDK docs or last month's article.)

Note that if you want access to the selection mask for a selection, you request (and get) it in a separate buffer. The request is made by setting the bounds of the maskRect field of the FilterRecord. (Zero these out if you don't want the mask data.) The 8-bit mask data will come back in (what else?) the maskData field of the FilterRecord. Of course, none of this matters if your plug-in has been called with the entire image selected. To determine if this is the case, inspect the haveMask field (a Boolean) in the FilterRecord at Start.

Get clear in your mind the fact that an image may have mask planes, but your plug-in isn't necessarily always going to be processing the whole image; often it'll be put to work on a selection area. The mask data for that selection area is different from the channel mask planes. Photoshop will mask your effect for you by default. But if you need access to the mask data in order to implement, say, some kind of matte-defringing effect, you can request it.
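
Here is a quick sketch of what the request looks like in practice. (The field names come straight from the FilterRecord; treat the 0-to-255 coverage interpretation as an assumption to verify against the SDK docs.)

   if (gStuff->haveMask)      // non-rectangular selection present?
   {
      // ask for mask coverage over the same area as our input request
      gStuff->maskRect = gStuff->inRect;
   }
   else
   {
      // a zeroed Rect means "don't fetch any mask data for us"
      PISetRect (&gStuff->maskRect, 0, 0, 0, 0);
   }
   
   // After AdvanceState() returns, gStuff->maskData points to a single
   // plane of 8-bit coverage values (0 = unselected, 255 = fully
   // selected), with gStuff->maskRowBytes bytes per raster line.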

Listing 4: Prepare Handler

DoPrepare()

Prepare to filter an image. If the plug-in filter needs a large amount of buffer memory, this routine should set the bufferSpace field of the FilterRecord to the number of bytes required.

void DoPrepare (GPtr globals)
{
   short    totalLines, rowWidth = 0;
   long      oneRow = 0;
   long      inOutRow = 0;
   long      inOutAndMask = 0;
   
   gStuff->bufferSpace = 0;

   // Check maxSpace to determine if we can process more than a row at a time
   
   ValidateParameters (globals);
   
   totalLines = 
      gStuff->filterRect.bottom - gStuff->filterRect.top;
   
   rowWidth = 2 * XMARGIN + 
      gStuff->filterRect.right - gStuff->filterRect.left;
   
   // Try to calculate how much memory will be needed for a
   // single chunk of image.
   // Start by calculating one row of data and its planes
   oneRow = rowWidth * (gStuff->planes);

   inOutRow = oneRow * 2; // inData, outData

   inOutAndMask = inOutRow + rowWidth; // maskData is one 8-bit plane
   

   // Now calculate what we'll need
   while ((  ((inOutAndMask * gRowSkip) + 
      YMARGIN*2*rowWidth) < gStuff->maxSpace) && 
      (gRowSkip < totalLines))
         
         gRowSkip++;
   
   gStuff->maxSpace = gRowSkip * inOutAndMask;

}

Start Handler

The Start handler, Listing 5, is where all the action happens. After validating our params, we look to see if the host supports the callbacks necessary to show a preview in our dialog; then we inform the host that we want buffers with padding on all sides; then we pop the user dialog (if needed) and proceed to the appropriate processing loop.

If the host supports the AdvanceState() callback - as every version of Photoshop from 3.0 on does - there is no need to have a Continue handler. We've nevertheless retained a function, StartNoAdvanceState(), that does all the setups for Continue-loop polling, in case our plug-in gets called by a host that doesn't support AdvanceState(). Space doesn't permit a listing of that (optional) code here, but the complete project is available online <ftp://www.mactech.com>.

Our filter will do pixel calculations that involve adjoining vertical and horizontal pixels (up to a radius of four pixels away from the center pixel), so we need to request edge padding for our buffers. We do this by stuffing a non-zero value in the inputPadding and outputPadding fields of the FilterRecord. (The Adobe-defined constant plugInWantsEdgeReplication will ensure that we get edge-padded buffers.) When we do this, AdvanceState() will not balk if we request pixels outside the image or selection area. Normally, AdvanceState() will throw a -30100 error if we ask for buffers that exceed the bounds of the image.

Note: It turns out that Adobe's sample code for the Dissolve example filter (in the SDK) contains a bug wherein padding is specified in one instance but not another, and an error sometimes occurs when the filter is re-invoked with Command-F. This bug has been in all five versions of the SDK going back to 1993. It can be fixed by checking for out-of-bounds requests involving gRowSkip.

The second function in Listing 5, StartWithAdvanceState(), drives the host through the buffer prefetch process. The way this is done is by setting the bounds of gStuff->inRect and gStuff->outRect (which are fields in the FilterRecord) to some subset of gStuff->filterRect. The filterRect field represents the overall bounds of the image (or selection) and we don't alter that Rect.

Listing 5: Start Handler

DoStart()

Validate our parameters, find out if AdvanceStateProc is available with this host, enable buffer padding, then pop the user dialog if necessary and dispatch our main processing loop.

void DoStart (GPtr globals)
{      
   ValidateParameters (globals);
   
   // We need the following callbacks in order to make a proxy
   // in our user dialog. 
   gUseAdvance = AdvanceStateAvailable () &&
             DisplayPixelsAvailable ();      
   
   // request edge padding
   gStuff->inputPadding = plugInWantsEdgeReplication;
   gStuff->outputPadding = gStuff->inputPadding;
   gStuff->maskPadding = gStuff->inputPadding;
   
   if (gQueryForParameters)
   {
      DoUI (globals);            // Show the user dialog
      gQueryForParameters = FALSE; // reset flag
   }

   if (gResult != noErr) // inform host of errors
      return;         
   
   if (gUseAdvance) // do all processing now
      StartWithAdvanceState(globals);
   else                     // do all processing in Continue
      StartNoAdvanceState(globals);
}
StartWithAdvanceState()

This is where we poll the host for image data, using AdvanceState(). Each call to AdvanceState() sends the FilterRecord back to the host with our request for more image data. If AdvanceState() returns normally, our buffers are filled and we can proceed to the pixel-processing function(s). If an error occurs, it's essential that it be reported immediately to the host in gResult.

void StartWithAdvanceState (GPtr globals)
{
   long i;
      
   SetUpInitialRect(globals);
   
   // First, we loop over all the image data in order to build a
   // histogram table:   

   do {                     

      gResult = AdvanceState ();
      if (gResult != noErr)
         goto done;
      
      TallyHistogram (globals);
      
      if (gResult != noErr)
         goto done;
      }
   while (DoNextRect (globals));   
   
   EqualizeHistogram(globals);   


   // set up first requested area...
   SetUpInitialRect(globals);
   
   // then we cycle thru rest of image, processing it
   do {
      gResult = AdvanceState ();
      if (gResult != noErr)
         goto done;
      
      DoFilterRect (globals, true);
      if (gResult != noErr)
         goto done;
      }
   while (DoNextRect (globals));

   done:
               // Now tell the host we're done by setting these Rects to zero
   PISetRect (&gStuff->inRect, 0, 0, 0, 0);
   PISetRect (&gStuff->outRect, 0, 0, 0, 0);
   PISetRect (&gStuff->maskRect, 0, 0, 0, 0);

}

Be clear on the fact that we don't need to allocate our own buffers; simply setting up inRect and outRect tells Photoshop what size buffers we need. When AdvanceState() returns, the inData and outData fields of the FilterRecord point to our (filled) buffers. We're then free to loop over the inData and write our processed pixels to outData.

The code to get our first chunk of image data, occurring in SetUpInitialRect(), looks like this:

   // We must tell the host how many channels' worth of data we want. 
   // ( gStuff->planes tells us what the maximum is for this image.)
   // If we just want R,G, and B, we need planes 0, 1, and 2.

   gStuff->inLoPlane = gStuff->outLoPlane = 0;
   gStuff->inHiPlane = gStuff->outHiPlane = 
      gStuff->planes - 1;
   
   // Now we set up our requested areas:
   gStuff->inRect = gStuff->filterRect;
   gStuff->inRect.bottom = gStuff->inRect.top + gRowSkip;
   InsetRect( &gStuff->inRect, -XMARGIN, -YMARGIN );
   
   // enforce bounds!
   if (gStuff->inRect.bottom > gStuff->filterRect.bottom)
      gStuff->inRect.bottom = 
         gStuff->filterRect.bottom + YMARGIN;
      
   // Now simply copy the input Rect bounds to the other Rects:
   gStuff->outRect = gStuff->maskRect = gStuff->inRect;

   // Now call AdvanceState() to tell host to hand us our filled buffers.

As you can see, AdvanceState() is perhaps an unfortunate name for a function whose main job - from the plug-in programmer's standpoint - is to prefetch data. A better name might have been GetMoreImageDataNow().

Before we show code for the main processing loop, let's talk about user dialogs for a moment, then explain the workings of our main processing algorithm, then go to the pixel looping code.

The User Dialog

The user interface dialog for a plug-in can be as simple or as Kai-Krause as you want. Adobe's SDK comes with a lot of good utility code for setting up dialogs, including code for creating realtime-updating preview panes (which Adobe calls proxies). The full source is too lengthy to reproduce here, but we can quickly summarize the proxy-creation process.

As you might expect, the proxy pane consists of a custom UserItem in the 'DITL' resource for the dialog. To make the pane update automatically, you attach a userProc to it via the Dialog Manager's SetDialogItem() call - a standard Mac dialog trick.

The subsampling factor for making the image fit the pane can be determined by doing integer-divides of the filterRect dimensions by the pane dimensions. A subsample factor of 4 means the proxy Rect is a quarter the height and width of the filterRect. If you set the inputRate and maskRate fields of the FilterRecord to the appropriate subsampling value, the host will subsample the image data for you on all subsequent calls to AdvanceState(). At present, only integral values for inputRate will work; fractional values will be supported in the future. (Note: Don't forget to reset the sample rate to 1.0 after the dialog returns, or else all subsequent plug-in operations will yield a postage-stamp-sized image!)
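
A sketch of the arithmetic, assuming a hypothetical 128-by-112-pixel proxy pane (inputRate and maskRate are declared as Fixed, 16.16 values in the SDK headers we've seen, hence the shift; double-check against your own PIFilter.h):

   short paneWidth = 128, paneHeight = 112;   // hypothetical pane size
   short hFactor, vFactor, factor;
   
   hFactor = (gStuff->filterRect.right - gStuff->filterRect.left)
                / paneWidth;
   vFactor = (gStuff->filterRect.bottom - gStuff->filterRect.top)
                / paneHeight;
   factor  = (hFactor > vFactor) ? hFactor : vFactor;
   if (factor < 1)
      factor = 1;                              // never upsample
   
   gStuff->inputRate = (long) factor << 16;    // integral subsample factor
   gStuff->maskRate  = gStuff->inputRate;
   
   // ...and when the dialog is dismissed:
   gStuff->inputRate = (long) 1 << 16;         // back to full resolution!
   gStuff->maskRate  = gStuff->inputRate;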

The userProc that does the drawing should simply size and center a Rect within the proxy pane, fill out a PSPixelMap data structure (see PIGeneral.h), then call the DisplayPixelsProc() callback to make the host draw into the pane using the PSPixelMap. The host will do the appropriate color space conversion and copy the results to the screen with dithering. Meanwhile, you haven't fussed with GWorlds, CopyBits(), or any Quickdraw calls whatsoever.
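
To make that concrete, here is a stripped-down sketch of such a userProc. It assumes the proxy's pixels have already been fetched and filtered into gStuff->outData at pane size, and that a file-scope copy of our globals pointer was stashed before the dialog was shown (the Dialog Manager gives a userProc no way to pass it in). The PSPixelMap field names below are written from memory of PIGeneral.h, so check them against your SDK headers.

static GPtr sGlobals;   // hypothetical: set this before running the dialog

pascal void ProxyUserProc (DialogPtr dialog, short itemNumber)
{
   GPtr       globals = sGlobals;   // lets us use the usual g-macros
   PSPixelMap proxyMap;
   VRect      srcRect;
   
   // Describe the buffer we want the host to draw:
   proxyMap.version       = 1;
   proxyMap.bounds.top    = 0;
   proxyMap.bounds.left   = 0;
   proxyMap.bounds.bottom = gProxyRect.bottom - gProxyRect.top;
   proxyMap.bounds.right  = gProxyRect.right - gProxyRect.left;
   proxyMap.imageMode     = gStuff->imageMode;
   proxyMap.rowBytes      = gStuff->outRowBytes;
   proxyMap.colBytes      = gStuff->planes;    // interleaved data
   proxyMap.planeBytes    = 1;
   proxyMap.baseAddr      = gStuff->outData;
   // (zero any remaining PSPixelMap fields - mat, masks, and so on -
   // if your header defines them)
   
   srcRect = proxyMap.bounds;
   
   // The host color-converts, dithers, and blits into our pane:
   (*gStuff->displayPixels) (&proxyMap, &srcRect,
                       gProxyRect.top, gProxyRect.left, 0);
}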

Naturally, each time the user tweaks a control or button in the dialog, you should do four things: update the control, cache the associated parameter value, call AdvanceState() and your processing function, then call InvalItem() on the proxy-pane item. And that's all there is to it.

Complete code is in the Latté project, or in Adobe's SDK.

Omnidirectional Edge Detection

Before we go to the main processing loop, let's talk about edge detection for a moment, since that's mainly what our plug-in does. (Latté implements blurring, edge-sketching, and embossing, but all three features rely on one core algorithm.)

Edge detection, by its very nature, involves a differencing operation of one sort or another. In essence, the goal is to identify areas of rapid brightness change. Simply taking the difference between neighboring pixels is a good way of finding edges in one direction (for example, edges with a north-south alignment). The trick is to find a way to reveal edges in their own natural directions, whatever those may be.

One way of doing this is to expand the dark parts of the image by blurring or diffusion, then subtract the original image from the blurred image, leaving just the difference. This is a variation of the familiar Unsharp Mask effect (a standard filter in Photoshop's Sharpen submenu). The problem with this technique is that it leaves annoying halos.

Another more-or-less standard trick is to apply a Laplacian convolution matrix (or area operator) to the image. The matrix just holds multiplier values for all the pixels covered by the matrix grid. The idea is to multiply underlying pixels by the appropriate coefficient, then sum everything and write the sum to the central pixel. (This operation is not done in place, but is written to a separate output buffer.) For example, consider the 3x3 matrix in Figure 2:


Figure 2. A 3x3 edge-detection matrix.

To convolve an image with the matrix of Figure 2, we simply loop over all the pixels, and for each pixel, do:

sum  = pixel[middle] * 12;
sum += pixel[upperleft]  * -1;
sum += pixel[above]      * -2;
sum += pixel[upperright] * -1;
sum += pixel[left]       * -2;
sum += pixel[right]      * -2;
sum += pixel[lowerleft]  * -1;
sum += pixel[below]      * -2;
sum += pixel[lowerright] * -1;

outData = sum;
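
Wrapped in an actual loop over a single 8-bit plane, the same operation might look like the sketch below. (The function name, the one-byte-per-pixel assumption, and the margins are illustrative only - they're not from the Latté project. The input pointer is assumed to point at the first interior pixel of an edge-padded buffer, like the ones Photoshop hands us, so the negative offsets are safe; interleaved data would use a column step of gStuff->planes instead of 1.)

static void Convolve3x3 (const unsigned8 *in,  // padded input plane
                  unsigned8 *out,              // separate output plane
                  long rowBytes,               // bytes per raster line
                  long rows, long cols)        // interior dimensions
{
   // The Figure 2 kernel: center weight 12, edge weights -2, corners -1.
   static const short kernel[3][3] = { { -1, -2, -1 },
                                       { -2, 12, -2 },
                                       { -1, -2, -1 } };
   long  r, c;
   short kr, kc;
   
   for (r = 0; r < rows; r++)
      for (c = 0; c < cols; c++)
      {
         long sum = 0;
         
         for (kr = -1; kr <= 1; kr++)
            for (kc = -1; kc <= 1; kc++)
               sum += kernel[kr + 1][kc + 1] *
                   (long) in[(r + kr) * rowBytes + (c + kc)];
         
         // clamp to 0..255 and write to the output buffer (never in place)
         if (sum < 0)   sum = 0;
         if (sum > 255) sum = 255;
         out[r * rowBytes + c] = (unsigned8) sum;
      }
}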

Photoshop, by the way, has a filter (look in Filter: Other: Custom) that implements a 5x5 convolution matrix with text-edit fields to let the user enter coefficients by hand. A little time spent playing with this filter will teach you a lot about convolution matrices.

The above type of matrix will find all the edges in an image, but it tends to bring out noise and gives stair-steppy outlines (poorly antialiased).

Latté solves these problems by first finding out which direction a given tile of pixels is biased toward, intensity-wise, then performing a differencing operation along that axis (using interpolated pixel values). Imagine if you were to apply the convolutions of Figures 3 and 4 to a pixel, caching the results for each operation in separate variables:


Figure 3.


Figure 4.

Applying the matrix in Figure 3 is tantamount to multiplying each pixel intensity by its 'x' coordinate. If you divide the final result by 255 (the maximum permissible luma value), you get two numbers (for 'x' and 'y' directions) whose ratio represents the tangent of the gradient angle. Since the gradient vector points at the tile's "center of gravity" in terms of pixel intensity, it is also (presumably) orthogonal to any edge that's present.

In Latté, we implement this method as a 9x9 matrix operation, not only for better precision but to average out gradient noise over an 81-pixel area. (See Listing 6.) Note that we divide by 72 rather than 81 because in averaging x-axis data we neglect pixels that lie on the y-axis, and vice versa. One row of points can be discounted in each case.

Listing 6: GetGradientDirection()

GetGradientDirection()

Given a pointer to our pixel data and the byte amounts for horizontal and vertical offsets to pixel data, we calculate the x-y "intensity moments" over a 9x9 pixel grid, thereby arriving at the cosine and sine of the gradient direction, which we return in gradientCos and gradientSin.

#define mOffset(x,y) (x * jump + y * rowbytes)
#define mPixel(ptr,x,y) (*(ptr + mOffset(x,y))) 

void GetGradientDirection(    unsigned8 *pix, 
                     unsigned32 jump,
                     int32 rowbytes,
                     float *gradientCos,
                     float *gradientSin ) 
{
   int16   row,col,lim;
   float   momentx = 0., 
         momenty = 0., 
         pixvalue,
         divisor;
   
   lim = 4;         // sets up 9x9 matrix operation
   divisor = 1./(1.404 * 128. * 72.); // eliminates divides
   
   // loop thru matrix
   for (row = -lim; row <= lim; row++)
      for (col = -lim; col <= lim; col++)       
      {   
         pixvalue = (float)mPixel(pix,col,row);
         momentx += pixvalue * (float)col;
         momenty += pixvalue * (float)row;
      }
   
   momentx *= divisor; momenty *= divisor;
      
   *gradientCos = momentx; // cosine
   *gradientSin = momenty;    // sine
   
   return;
}

Once we know the luma gradient's direction, all that remains is to difference pixels across this gradient. But if the gradient can run across our (square) pixels at any angle, how do we know which brightness values to subtract? The answer is, we obtain "between-pixel" values by interpolation.

The situation is summarized in Figure 5. Given a grid of pixels with various values, and given that our goal is to detect edges, how are we to determine the (new) value of the center pixel? The answer is, first we find the sine and cosine of the gradient vector (as explained before), then we take the difference of any two points located on the gradient axis.


Figure 5. Differencing across a gradient. The problem is, given the 5x5 block of pixels shown on the left, and given that we want to detect edges, how do we calculate the value of the center pixel? The answer is, we find the position-weighted average of pixel values in 'x' and 'y' directions, to come up with the gradient vector (shown on the right). Then we take the difference of any two points (on opposite sides of the origin) that lie on the gradient axis.

The key to understanding what's going on here is to think of pixels not as square objects with adjoining sides, but as infinitely small points on a lattice. Each point gives the image intensity at that spot in the lattice. But don't think of the in-between areas as empty. Think of the entire lattice as a continuous 2D luma space for which individual pixels are merely spot samples.

But how do we calculate in-between pixel values? Given four equally spaced pixels and a sample point falling inside the square defined by them, the problem is how to determine the interior point's luma value by interpolation from the "corners" of the cell. The example in Figure 6 shows a point given by coordinates (0.67, 0.78) enclosed in a unit cell where the upper right corner is (1, 1). The pixel values of the corners are (clockwise from the origin) 100, 254, 74 and 16. The question is, what's the value at (0.67, 0.78)?


Figure 6. What's the value at (0.67,0.78)?

The answer is simple. First, we interpolate between 100 at (0,0) and 16 at (1,0) to get the intermediate value of 44, which is calculated by straight linear interpolation (or lerping). We can use a macro for this:

#define mLerp(a,b,f) ( (1. - f) * a + f * b)

Thus, mLerp(100, 16, 0.67) gives 44 for the point at (0.67, 0). Likewise, we interpolate along the line from (0,1) to (1,1) to get mLerp(254, 74, 0.67) == 134 for the point at (0.67, 1). See Figure 7.


Figure 7. Interpolation along the edges of a cell.

All that remains is to interpolate along the line segment from (0.67, 0) to (0.67, 1) to get the value of our interior pixel, i.e., mLerp(44, 134, 0.78) == 114.
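
That two-step lerp is exactly what the QuadLerp() call in Listing 7 performs. A plausible implementation - a sketch consistent with the walkthrough above and with the argument order used in Listing 7; the shipping project keeps its own version in the utility files - looks like this:

float QuadLerp (  float cornerUL, float cornerUR,
               float cornerLL, float cornerLR,
               float fx, float fy )
{
   // interpolate along the bottom edge (y == 0) and the top edge (y == 1)...
   float bottom = mLerp (cornerLL, cornerLR, fx);
   float top    = mLerp (cornerUL, cornerUR, fx);
   
   // ...then interpolate between those two results along 'y'
   return mLerp (bottom, top, fy);
}

Plugging in the Figure 6 values gives mLerp(44, 134, 0.78), or about 114, just as before.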

There's another way to do this: We can develop blending factors for the corner pixels very easily once we realize that the cell subdivision furthest from any given pixel is proportional in area to that pixel's contribution. For example, the contribution of pixel (0,1) to the interior point's brightness is given by 0.78 * 0.33 == 0.26. If we go around the corners one by one in this fashion, we get the blending factors shown in Table 1.

Vertex     Blend Factor             Pixel Value
(0,0)      0.33 * 0.22 == 0.07      0.07 * 100 == 7
(0,1)      0.33 * 0.78 == 0.26      0.26 * 254 == 66
(1,1)      0.67 * 0.78 == 0.52      0.52 * 74  == 39
(1,0)      0.67 * 0.22 == 0.15      0.15 * 16  == 2
TOTAL      1.00                     114

Table 1. Barycentric coordinates for a point in a cell

Note that the blending factors - which are properly called barycentric coordinates - sum to unity (because we're working with a unit cell). To use this method with arbitrarily sized cells, simply normalize every coefficient by dividing by the total area of the cell.

Note also that if we calculate any three blending coefficients, the fourth drops out automatically by subtraction.

Action Radius

Knowing how to interpolate brightness at any arbitrary point between pixels gives us a great deal of power and flexibility, because now we can offer the user the option to set fractional pixel-radius values in a dialog (as Latté does). We can also do blurs and edge-retrievals at an action radius of less than one pixel.

In Latté, we fetch the user's radius as a double and multiply the radius times the gradient cosine and sine to get the floating point coordinates of a "phantom pixel" target. We also negate the coordinates to get the corresponding "mirror image" phantom pixel.

The rest is easy, because with brightness values for our interpolated pixels in hand, we can difference them to get edges, average them to achieve edge-orthogonal blurring, etc. To give better visual results, Latté actually operates on two pairs of (mirror-image) interpolated pixels, spaced equally apart from the central pixel - a four-tap filter. See Listing 7.

In "Sketch" mode (edge detection), Latté provides a nice alternative to Photoshop's own "Find Edges" filter. (See Figure 1.) Find Edges is quite a bit faster than Latté, but unlike Find Edges, Latté lets the user set the edge-detection radius to any floating-point value between 0.1 pixel and 4.0 pixels. Also, noise rejection and antialiasing are decidedly better with Latté.

Listing 7: TransformPixel()

TransformPixel()

Given the data ptr, appropriate horizontal and vertical offsets between pixels, a user-supplied radius value, a mode selector, and the gradient direction (sine and cosine values), transform the central pixel based on a 4-tap filter operation (differencing, averaging, or embossing, as per the user's mode choice).

// we need this in order to prevent rollover:
#define kAlmostOne 0.9999

unsigned8 TransformPixel(    unsigned8 *pix, 
                     unsigned32 jump,
                     int32 rowbytes,
                     double radius,
                     long mode,
                     float gradientCos,
                     float gradientSin ) {

   int16   i;
   float   difference = 0.,
         average = 0.,
          sketchValue, 
         step,
         multiplier;   
   
   for (   multiplier = -radius, 
         step = radius/2.,
         i = -4; 
         
         i <= 4;
          
         i+=2, 
         multiplier += step)
 
   {
      float    polarity;
      int32    x_lo,x_hi,y_lo,y_hi;
      double   dummy;
      float    fx, fy, pixvalue;
      float    cornerUL, cornerUR, cornerLL, cornerLR;
      double   scaled_x,scaled_y;
             
      // this is a 4-tap symmetric filter omitting the central pixel, so:
      if (!i) continue; 
            
      scaled_x = multiplier * gradientCos * kAlmostOne;
      scaled_y = multiplier * gradientSin * kAlmostOne;
      
      if (gradientCos == 0.) 
         x_hi = x_lo = 0;      
         
      else {
         x_lo = scaled_x; // this is an integer cast!
         
         x_hi = (scaled_x < 0.) ? 
            x_lo - 1 : x_lo + 1;    // integer cast!
         }   
         
      if (gradientSin == 0.) 
         y_hi = y_lo = 0;
      
      else {
         y_lo = scaled_y;
      
         y_hi = (scaled_y < 0.) ? 
            y_lo - 1 : y_lo + 1;
         }
      
      fx = scaled_x;       // x-coord of hit pt
      fy = scaled_y;      // y-coord of hit pt
      
      // once inside the cell, we'll need the decimal part only:
      fx = fabs(modf((double)fx,&dummy)); 
      fy = fabs(modf((double)fy,&dummy)); 
   
      cornerUL = (float)mPixel(pix,x_lo,y_hi);
      cornerUR = (float)mPixel(pix,x_hi,y_hi);
      cornerLL = (float)mPixel(pix,x_lo,y_lo);
      cornerLR = (float)mPixel(pix,x_hi,y_lo);         
      // interpolate:
      pixvalue = QuadLerp( cornerUL,
                      cornerUR, 
                      cornerLL,
                      cornerLR,
                      fx,
                      fy);
      
      polarity = (i < 0) ? -1. : 1.;
      
      difference += polarity * pixvalue;
             
      average += pixvalue;
   } // end for loop
   
   // we have three operating modes,
   // given by 3 radio buttons:
   
   if (mode == dialogOperatingModeBlur) 
      return (unsigned8) (average/4.);
      
   if (mode == dialogOperatingModeSketch) {
      sketchValue = 255. - fabs(difference);   
      return (unsigned8) sketchValue;
      }      
      
   // else mode == dialogOperatingModeEmboss:
   return (unsigned8) (difference/2 + 128);
}

The Processing Loop

Finally, we're in a position to show the actual processing loop code, Listing 8. The code looks a bit complicated at first, but really it's quite straightforward. We're basically setting up a double nested loop to traverse all rows and columns in our image chunk, except the margin areas (as explained above). Inside the first loop we make our calls to the host's progress-bar and user-abort-detection functions. We also set up our in and out data pointers.

In the "columns" loop (the inner loop), we get our gradient direction, then transform the pixel, apply histogram and gamma corrections as necessary, blend the transformed pixel with the original pixel to the degree specified by the user (zero to 100 percent), and bump the in and out pointers.

The DoFilterRect() function is called repeatedly in the Start phase after each call to AdvanceState(), until the entire image (or selection) is processed.

Listing 8: DoFilterRect()

DoFilterRect()
void DoFilterRect (GPtr globals, 
   const Boolean notCalledFromUserDialog)
{
   short i, j;
   short plane, expectedPlanes = 0;
   const short columns = 
      gStuff->outRect.right - gStuff->outRect.left - 
      2 * XMARGIN;
   const short rows = 
      gStuff->outRect.bottom - gStuff->outRect.top - 
      2 * YMARGIN;
   double userRadius = gRadius;
   unsigned8 *srcPtr = (unsigned8 *) gStuff->inData;
   unsigned8 *dstPtr = (unsigned8 *) gStuff->outData;   
   
   // find out how many color channels:
   expectedPlanes = 
      CSPlanesFromMode(gStuff->imageMode, expectedPlanes);
         
   for (i=0; i < rows; i++)
      {
               // set up pointers
      srcPtr = (unsigned8 *) gStuff->inData + 
         ((i + YMARGIN) * gStuff->inRowBytes) + 
         (gStuff->planes * XMARGIN);
      dstPtr = (unsigned8 *) gStuff->outData + 
         ((i + YMARGIN) * gStuff->outRowBytes) + 
         (gStuff->planes * XMARGIN);
      
      // Suppress or show progress bar as necessary:
      if (notCalledFromUserDialog == true)    
         UpdateProgress ((long) i,(long) rows);

      if (TestAbort ()) // check for user abort
         {
            gResult = userCanceledErr;
            return;
         }   
         
      for (j = 0; j < columns; j++)
      {
         unsigned32 srcPlaneJump = 
            gStuff->inHiPlane - gStuff->inLoPlane + 1;
         float gradientCos,gradientSin;
         unsigned8 ch;
         
         GetGradientDirection(    
               srcPtr,                  // gradient from the first plane
               srcPlaneJump,
               (int32) gStuff->inRowBytes,
               &gradientCos, &gradientSin );
                        
         for (plane = 0 ; plane < expectedPlanes; plane++) 
         {
            ch = TransformPixel( 
               (unsigned8 *)((long)srcPtr+plane), // ptr to data
               srcPlaneJump,                  // bytes per pixel
               gStuff->inRowBytes,         // bytes per raster line
               userRadius,                    // theRadius of influence
               gOperatingMode,                // mode (sketch, blur, etc.)
               gradientCos,gradientSin);            
               
            // look up the "equalized" value?
            if (gUseEqualizedHistogram)          
               ch = (unsigned8) gParams->histogram[ch];
               
            // gamma-correct:
            ch = BiasPixel((float)ch,BIAS_DEFAULT);
                  
            // blend the transformed pixel with original pixel:
            dstPtr[plane] = 
               Lerp((float)srcPtr[plane],ch,(float)gBlendFactor);
         }   // for plane
               
         srcPtr += srcPlaneJump; // bump pointers
         dstPtr += srcPlaneJump;
      }   
   }
}

Gamma Adjustment

In Latté, we implement gamma with a "bias" macro, which looks like this:

#define mBias(pixel,adjustmentValue) \
   (pow((double)(adjustmentValue),log((double)pixel)/log(0.5)))

To achieve less gamma, we just use mBias() with an adjustment value in the range 0.0 to 0.5. For more gamma, we call mBias() with an adjustment value in the range 0.5 to 1.0. (See Figures 8 and 9.) A bias adjustment of 0.5 simply remaps the data to itself with no change.



Figures 8 and 9. Bias provides an intuitive mechanism for adjusting gamma, because values cover the domain of zero to 1.0, with 0.5 remapping data to itself with no change.
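
Listing 8 runs each transformed channel value through a helper called BiasPixel(). Here is a sketch of how such a helper might wrap mBias() - the real one lives in the project's utility files - with the only wrinkle being the normalization of 8-bit values to the 0.0-1.0 domain and back:

unsigned8 BiasPixel (float pixel, float adjustmentValue)
{
   double normalized, biased;
   
   if (pixel <= 0.)                 // log(0) is undefined, so bail early
      return 0;
   if (pixel >= 255.)
      return 255;
   
   normalized = pixel / 255.;       // mBias() works on the 0..1 domain
   biased     = mBias (normalized, adjustmentValue);
   
   return (unsigned8) (biased * 255. + 0.5);   // rescale and round
}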

Histogram Equalization

Histogram equalization works like this. We assume that there should (ideally) be roughly equal numbers of pixels for each luma value in the picture; pixel values in "clumpy" areas should be spaced out more evenly. If we step through a pixel-population table one step at a time, and accumulate a running total of the number of pixels, we can construct a lookup table that more closely adheres to our desired distribution criteria by doing something like the following:

for (runningTotal = a = 0; a < 256; a++)
{
   runningTotal += population[ a ];    // accumulate the pixel count

   ratio = (float) runningTotal / totalPixels;

   // now build a new lookup table
   lookupTable[ a ] = 255.0 * (float) ratio;
}

Later, when we need to write pixels to our output buffer, we go through the lookup table to convert "raw" pixel values into properly equalized values.

Note that the lookup table can be typed as unsigned8 or unsigned char, but the population table (with bins for the number of pixels of each brightness level) should be declared an array of type long or double, because in a large image a single bin can easily hold more than 32,767 pixels, and you don't want a 16-bit bin count to roll over.

The histogram routines are called in our plug-in at the beginning of every Start phase, in StartWithAdvanceState(), Listing 5. The histogram table is built by looping over the pixels in the image; then EqualizeHistogram() is called; then the image is processed. In our actual code (full project online at <ftp://www.mactech.com>), the histogram is tallied by subsampling every fourth pixel of the source image. This concession to speed is not as sloppy as it seems, because for most images, it gives a histogram negligibly different from that obtained by scrupulous sampling of every pixel. Likewise, the decision to accumulate all the histogram results (from all color channels) into one big histogram table, rather than into separate tables for each color plane, is justified on the basis that the visual result is as good or better than with Photoshop's own Equalize command.
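
For reference, here is roughly what the tallying pass looks like - a sketch only, since the shipping TallyHistogram() keeps slightly different bookkeeping. It assumes gParams->histogram was zeroed before the first chunk and doubles as the population table (EqualizeHistogram() later overwrites it with the lookup values used in Listing 8), with gParams->imagesize keeping the running sample count:

void TallyHistogram (GPtr globals)
{
   const short columns = 
      gStuff->inRect.right - gStuff->inRect.left - 2 * XMARGIN;
   const short rows = 
      gStuff->inRect.bottom - gStuff->inRect.top - 2 * YMARGIN;
   short i, j, plane;
   
   for (i = 0; i < rows; i++)
   {
      unsigned8 *srcPtr = (unsigned8 *) gStuff->inData + 
         ((i + YMARGIN) * gStuff->inRowBytes) + 
         (gStuff->planes * XMARGIN);
      
      // subsample every fourth pixel; plenty for a stable histogram
      for (j = 0; j < columns; j += 4, srcPtr += 4 * gStuff->planes)
      {
         // lump the channels into one population table
         // (the shipping code restricts this to the color planes)
         for (plane = 0; plane < gStuff->planes; plane++)
         {
            gParams->histogram[ srcPtr[plane] ] += 1.;
            gParams->imagesize++;
         }
      }
   }
}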

Conclusion

As promised, we've covered a rather large amount of ground in a short space (if you can call a 15,000-word, two-part article "short"). But we've still only scratched the surface where Photoshop plug-ins are concerned. We've only hinted at the power of "area operators" (2D convolutions) and we said precious little about harnessing Photoshop's versatile layers and alpha channel properties. And we didn't say anything at all about making plug-ins PICA compliant (per Adobe's Plug-In Component Architecture) or scripting-aware. (Maybe in future articles?) For more information on these topics, be sure to see Adobe's SDK documentation (which consists of 1,000 pages of information in .pdf format) as well as the SDK example code - all 17 megabytes of it.

With what you know now, you should be able to bootstrap your way through Adobe's example code very quickly - and maybe write a best-selling OCR filter, QuickTime animation tool, 3D design module, or other mind-warping masterpiece of code. (Be sure to e-mail me a copy when you're done.) But be careful. Once you get started writing plug-ins, you just may become addicted.


Kas Thomas <tbo@earthlink.net> has been writing plug-ins for Photoshop since version 2.0. He is the author of the four-star-rated (by ZDNet) Callisto 3D shareware plug-in, available at http://users.aol.com/Callisto3D and elsewhere.

 
