Portable Network Graphics (PNG)

November 1st, 2012

I like Portable Network Graphics (PNG) for bitmap images. PNG improves on the GIF format and displays well in both PDF and web-based output.

I was dealing with a client’s data this week that included PNG graphics. The output quality was really poor, but when I opened each graphic in a program like GIMP the image quality was fine. It turns out the source PNG files were corrupt; GIMP was quietly repairing them before rendering them to the screen.

I found the corruption by using a Linux-based command-line utility called pngcheck.


Once I discovered the problem, I used another command-line utility called pngcrush with a shell script to loop through all the graphics and fix the errors.
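
For reference, here is a rough sketch of that kind of batch repair, written in Python rather than the original shell script. The directory names are placeholders, and the pngcheck and pngcrush options should be confirmed against the man pages for your installed versions.

#!/usr/bin/env python3
# Sketch only: check every PNG in a folder with pngcheck and run pngcrush
# on the broken ones. Directory names are placeholders.
import subprocess
from pathlib import Path

source_dir = Path("graphics")        # assumed input directory
fixed_dir = Path("graphics-fixed")   # repaired copies are written here
fixed_dir.mkdir(exist_ok=True)

for png in sorted(source_dir.glob("*.png")):
    # pngcheck exits with a non-zero status when it finds a problem
    if subprocess.run(["pngcheck", "-q", str(png)]).returncode == 0:
        continue  # file is clean, nothing to do
    print(f"Corrupt: {png}")
    # pngcrush's -fix option attempts to repair bad CRCs and similar errors
    subprocess.run(["pngcrush", "-fix", str(png), str(fixed_dir / png.name)])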


I love command lines!

C. Angione

Determining if XML Content should be printed on 8.5 x 11 or A4 Paper

August 18th, 2011

I saw a clever style sheet implementation this week. It determined the correct page layout size using the xml:lang attribute: if xml:lang is set to anything other than “en”, use the international ISO standard A4 size!

US Letter is actually the most common paper size for office use in several other countries, including Canada, Mexico, Bolivia, Colombia, Venezuela, the Philippines, and Chile, so there are a few more language codes that should be added for the US Letter size if you are doing business there.
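
The decision itself is simple enough to sketch outside the style sheet. The following is illustrative only, written in Python rather than XSLT, and the list of US Letter locales is an assumption and far from exhaustive.

# Sketch of the page-size decision, not the original style sheet.
# The US Letter locale list is illustrative, not exhaustive.
US_LETTER_LOCALES = {
    "en", "en-US", "en-CA", "en-PH",               # English-language US Letter markets
    "es-MX", "es-CO", "es-VE", "es-BO", "es-CL",   # Spanish-language US Letter markets
    "fil-PH",
}

def page_size(xml_lang: str) -> tuple[str, str]:
    """Return (page-width, page-height) based on the document's xml:lang."""
    if xml_lang in US_LETTER_LOCALES:
        return ("8.5in", "11in")   # US Letter
    return ("210mm", "297mm")      # ISO A4

print(page_size("en-US"))  # ('8.5in', '11in')
print(page_size("de-DE"))  # ('210mm', '297mm')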

Towards a Better Understanding of Software Customization

January 20th, 2011

Packaged software simplifies upgrades and maintenance because there is a single, uniform code base. Support and adoption are also simplified because the single system is well known. Customization of software, by definition, is the modification of packaged software to meet individual requirements. Various valid enterprise requirements lead to customization of packaged software. In general, the benefits, as well as the shortcomings, of available customization techniques are poorly understood and improperly lumped together into an all-encompassing perception that all customizations inhibit evolvability.

Maintenance, or evolution, is the longest and most expensive phase of the application lifecycle. Once released, software has to be corrected and updated. Evolvability is the key metric used to judge an application’s ability to incorporate updates and new revisions of packaged software while maintaining required customizations throughout the application lifecycle. This post defines three levels of software customization and their impact on application evolvability.

The Three Levels of Software Customization

An application may be defined as the combination of packaged software and the customizations required to support an end user in performing user-specific tasks efficiently.

Individualization

The first level of customization, known as Individualization, is commonly referred to as “customization through configuration”. Important aspects of corporate identity, such as the logo and color scheme, should reflect the organization’s corporate design. Packaged software should provide different options to different user groups. Reports should reflect company identity and contain the information necessary to support an organization’s processes, workflow and individuality.

This first level of customization almost always takes place for enterprise applications and tends to be a non-controversial practice. Properly executed, Individualization is well understood and requires a fairly low degree of effort to implement and maintain over the application’s evolvable lifespan.

Key metrics for Individualization are the number of properties, options and configuration settings changed from the packaged software’s installation baseline.

Tailoring

The second level of customization is Tailoring, which represents a stable middle ground for the application’s continued evolvability. Packaged software comes with built-in assumptions and procedures about organizations’ business processes. These assumptions and procedures seldom match exactly with the implementing organization’s existing processes. Therefore, most implementation projects involve some degree of software tailoring so that the software will fit current organizational processes. Tailoring may involve module selection, table configuration, or the addition of encapsulated new user functions.

In Module Selection, companies choose to implement one or more modules of a software application. In this case, customization is achieved through the company’s module selection. A key metric for module selection is the number of implemented modules versus the total number of modules available.

Table Configuration, another tailoring technique, allows an enterprise to eliminate features that are not pertinent to a given task and tailor the necessary features to better suit that task, such as by selecting more appropriate application defaults or by using task-specific vocabulary in the application. A key metric for Table Configuration is the number of fields configured in each application table.

Tailoring by using Encapsulated User Functions may be broken into five categories: external input types, external output types, logical internal types, external interface types and external inquiry types.

All of these second level tailoring techniques utilize software “open points” built into the application’s framework. Software open points, popularly known as application programming interfaces (APIs), permit changing a software system by exposing internal components in an explicit and malleable way for deployment of new or missing functions to users. Properly using these open points to extend or enhance packaged software’s built-in behaviors adds the requirement for proper development and test environments and for a higher degree of skill, usually provided by the system integrator.

The degree to which second level customizations can easily be maintained over the application’s evolvable lifespan largely depends on the resiliency of the exposed APIs. In mature packaged software, APIs tend to be relentlessly maintained, making tailored customizations predictable during future software package/customization deployment cycles. A metric measuring the number and frequency of changes to established APIs provides insight into the stability and resiliency of the application’s open points.

Core Code Changes and New Custom Software Modules

The third level of customization involves Core Code Changes and New Custom Software Module additions to packaged software. This level of customization brings with it the complexities of true software development, integration and testing during each revision/upgrade cycle. Customizations at this level frequently undermine the confidence in the packaged software’s integrity and the overall application’s evolvability.

The number and frequency of these third level changes should be carefully tracked. A high number of these changes indicates that the packaged software may not be suitable for an application. Additionally, adoption effort is an important metric with this level of customization since there is no baseline training or documentation upon which to rely.

A healthy evolvability metric for an application with third level customizations would be a downward trend in the number of third level customizations needed as new packaged software modules become available in future package/customization deployment cycles.
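
As a rough illustration of that trend metric, the sketch below counts third level customizations per deployment cycle and checks that the count is heading downward. The release labels and counts are invented for the example, not taken from a real project.

# Illustrative only: release names and counts are invented to show the metric.
third_level_customizations = {
    "release-1.0": 12,  # initial gap-filling core changes and custom modules
    "release-2.0": 9,   # some replaced by newly available packaged modules
    "release-3.0": 5,
    "release-4.0": 3,
}

counts = list(third_level_customizations.values())
trending_down = all(later <= earlier for earlier, later in zip(counts, counts[1:]))
print("Healthy evolvability trend:", trending_down)  # True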

Conclusion

Evolvability should be an intrinsic metric in an application’s design, initial implementation and maintenance, since healthy enterprise applications evolve over time. Each customization to packaged software can be categorized and measured. Properly implemented first and second level customizations provide an acceptable level of evolvability for an application. Third level customizations, while initially necessary, typically impact evolvability and should trend downward over the lifespan of a project, while first and second level customizations may trend upward to accommodate the configuration and tailoring of the new replacement modules.

-Charles Angione

A Simple But Useful 3D Model Use Case

August 5th, 2010

While speaking with a customer today about 3D models and their role in service information, I heard a really simple but powerful use case. Part of the power of being able to rotate a 3D model is not seeing the entire model from lots of different angles, but seeing it from your current physical perspective.

Two scenarios were discussed:
1) A part currently in a service technician’s hand, with the 3D model positioned in the same orientation.
2) A person looking at the right-hand side of a complete piece of equipment when the 2D illustration shows only the left.

The utility of associating the virtual model with the physical product is obvious to me now, but I couldn’t have put it into words before today.

-CA

Large Volumes of Mission Critical Information

March 18th, 2010

Like most people, I’m frequently asked what I do for a living. I tell people that I work with companies that have a large quantity of mission-critical information that they need to be able to find in an instant, and that information had better be right every time.

The Economist magazine just had a really good special report, “The data deluge”, on the flood of data that individuals, companies and governments face.

Some things that resonated with me from the report:

In 1971, the economist Herbert Simon wrote:

“What information consumes is rather obvious: it consumes the attention of its recipients.”

I like his conclusion:

“Hence a wealth of information creates a poverty of attention.”

The term “data exhaust” was used to describe the trail of clicks that Internet users leave behind in the course of a transaction. This exhaust can be mined and made useful; Google refining its search engine to take the number of clicks on an item into account when determining search relevance is one example. I really like this “data exhaust” term and believe it fits well with trying to make sense of large data sets. Dense patches of exhaust could indicate that instructions are not clear enough in service documentation, or, properly mined, could point to an impending issue with a particular component in a product.

“Delete”, written by Viktor Mayer-Schönberger, argues that systems should “forget” portions of a digital record over time. Systems could be designed so that parts of digital files degrade over time to protect privacy, yet the items that remain could possibly benefit all of humankind. The concept of donating your digital corpse (medical reports, test results, etc.) to science comes to mind as a good example of this concept. While I might not want people to be able to link my name to my medical records, the records themselves, with no name attached, would provide a lifetime of data that could be used to advance lots of different fields.

Consistently creating the right set of rules for the ethical use of various types of data exhaust will be tricky. The article in The Economist mentions six broad principles for an age of big data sets that I liked:

  1. Privacy
  2. Security
  3. Retention
  4. Processing
  5. Ownership
  6. Integrity of Information

C. Angione

Full Screen mode in Firefox with the Windows taskbar present

August 29th, 2009

I’ve always been a big fan of Firefox. The Firefox full screen (F11) mode, which covers the Windows taskbar at the bottom of the screen and removes the top window bar, is great when reading space is at a premium, like on a small laptop. There are times when I want the same look as Full Screen mode but with the Windows taskbar visible at the bottom so I can quickly switch tasks.

I added a plugin called userChromeJS that allowed me to get rid of the top window bar but leave the Windows taskbar at the bottom. Once you install the plugin, add the following function to the userChrome.js file.

function hideChrome() {
  // Hide the Firefox window chrome (title bar) while leaving the OS taskbar usable.
  if (navigator.platform == "Win32") {
    window.moveTo(0, 0);
    window.maximize();
    document.getElementById("main-window").setAttribute('hidechrome', 'true');
    // Preserve a small area at the bottom so the Windows taskbar can appear.
    window.resizeTo(screen.availWidth, screen.availHeight - 2);
  } else {
    document.getElementById("main-window").setAttribute('hidechrome', 'true');
    window.moveTo(0, 0);
    window.resizeTo(screen.availWidth, screen.availHeight);
    window.maximize();
  }
}
hideChrome();

userChrome.js is located in your userprofile\AppData\Roaming\Mozilla\Firefox\Profiles\.default\chrome directory.

Controlling Multiple Computers Without a Keyboard Switch

February 13th, 2009

When I’m working in my home office, I typically have my Windows XP laptop on the left-hand side of my desk, next to the rest of my monitors. I’ve always wanted to control the laptop from my main keyboard and mouse without having to hook up a keyboard switch. I find the hardware solution annoying when you are trying to work on both machines essentially at once, or when you just want to pick up your laptop and sit somewhere else (lazy, I know). When I drag my mouse off the left side of my Linux desktop, I want to control my laptop. When I drag my cursor off the right side of my laptop screen, I want to resume working with my Linux desktop. Essentially, I want to seamlessly make my Windows XP desktop part of my Linux desktop.

There is an open source project called Synergy that functions exactly this way over your TCP/IP network. The program redirects the mouse and keyboard as you move the mouse off the edge of a screen. Synergy also merges the clipboards on each system into one, allowing you to copy-and-paste between systems. Handy for copying stuff out of e-mail!
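
Synergy’s behavior is driven by a small configuration file. The following is only a sketch of the layout described above; the screen names are placeholders and must match your machines’ hostnames (or aliases you define):

# synergy.conf sketch -- "desktop" and "laptop" are placeholder hostnames
section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        left = laptop
    laptop:
        right = desktop
end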

Like all good open source projects, the program works on any combination of Linux, Windows XP and Mac OS X 10.2 and higher.

Enjoy!

Charles Angione

Building a Modern 64-bit Operating System

February 10th, 2009

I recently upgraded the memory in one of my home workstations to 4 gigs. I’ve always wondered what it would be like to run a 64-bit OS on this AMD64 system and decided to refresh the entire machine to take full advantage of the memory upgrade. I decided to go with the latest 64-bit Ubuntu Linux distribution codenamed Intrepid Ibex (8.10) as my base OS. As I put this upgraded machine through its paces, I’m a bit amazed at what the machine is now capable of.

Because of the development, conversion and simulation work that I do, I tend to demand a lot from my computers. My workstation runs at least one virtual operating system in VMware Server alongside the work I do on the host OS. There are times, when the virtual OS and the host OS are really crunching away, that things slow to a crawl… and that’s being kind. The computer becomes essentially unusable for day-to-day activities like e-mail and web browsing.

I intentionally tried to crush the machine I upgraded by running VMware Server with two Windows virtual machines performing CPU-intensive tasks. While that was going on, I did normal things like surfing with Firefox, writing e-mail and remotely connecting to other systems. I didn’t see any lag or slowdown in the UI. Everything was very smooth, and I’ve become a believer!

I admit that 64-bit operating systems may not be practical for simple desktop use at this point. Not all applications run on 64-bit systems, but you can run 32-bit virtual operating systems within a 64-bit host system.

If you think today’s computers are fast, wait until they have a 64-bit OS! It isn’t about megahertz anymore; it’s about doubling the width of the data a CPU can crunch per clock cycle. I’ve concluded that a 64-bit chip and a 64-bit OS do have the power to dramatically improve the performance of your more demanding applications. It revolutionizes what a single workstation can do.

Happy Crunching!

-Charles Angione


The Digital Realm and Digital Experience – A Ten Year Review

October 12th, 2008

I have a slide show widget that runs in my desktop sidebar. The slide show pulls pictures from the “My Pictures” folder, which contains images from the last ten years. I was an early adopter of digital photography and bought my first digital camera on Guam before a trip deeper into the South Pacific.

Funny story. The first use of my new digital camera was to send pictures back to my company headquarters documenting the horrible shape the equipment we were working on was in! There had been a complaint from the customer about the extra money required to bring things back into operational shape. No more complaints once I sent in the pictures! I knew back then that digital photography would be a powerful medium!

One of those early pictures flashed across my screen tonight and I went to enlarge it. It was taken at a resolution of 640 x 480. Back then you were an advanced user if you were at 800 x 600 on a laptop screen and a picture at 640 x 480 looked pretty good!

When viewed on a modern laptop at 1280 x 800, that same picture looks grainy and no bigger than a postage stamp! It is amazing how fast the world has changed from film photography to digital.

Kwaj Sunset 1998 @ 640 x 480 – 96 dpi

I wish I had had my modern-day digital camera 10 years ago. The color and depth of today’s pictures, even through several levels of zoom, are amazing. I feel like I can crawl into the shots and look around. The shots I took ten years ago are a good test of my memory and imagination, with no zooming at all. To crawl into those shots and look around, I need to close my eyes and remember what it felt like to be there firsthand.

Mount Wire 2008 @ 3072 x 2304 – 72 dpi

We are in the midst of an incredible revolution with digital High Definition photography and video. It takes the old photo album to a level where we can actually imagine we are there without leaving our living rooms.

I’m all for digital progress and photography, yet I believe there is something to be said for actually living outside the digital realm every once in a while. I will cherish both of the photos shown here because I was there in 1998 and 2008. For those who did not have these experiences, I still hope you enjoy the photos!

-C. Angione – 1998 & 2008

Using UML to describe DITA Specializations

September 29th, 2008

As a consultant, I’m often parachuted into complex projects and need to be able to appear intelligent in a short period of time to both technical staff and business people. In setting up a publishing system based on the Darwin Information Typing Architecture (DITA), I’m faced with trying to communicate the specialized structures and element names needed by an organization.

One of the most important principles of DITA specialization is the concept of inheritance. Inheritance allows you to use the structures and semantics previously defined by others as a starting point for your specializations. But how do you communicate this idea in a simple, clear and concise manner, again and again, during the design process? If a picture is worth a thousand words, a UML model is invaluable.

The Unified Modeling Language (UML) is a graphical notation that is particularly good at expressing object-oriented designs. UML went through a standardization process and is now an Object Management Group (OMG) standard.

I like using UML because it allows me to communicate certain concepts more clearly than alternatives like natural language. Natural language is too imprecise and subject to interpretation by the reader. A DITA DTD or Schema is very precise but not something that should be created during the design process. So I use UML to communicate and keep track of the important details like inheritance.

A lot of people talk about the learning curve associated with UML. I’m not advocating that team members be exposed to all that UML can provide, only the parts necessary to convey the important details of the moment.

Let’s say that we want to build a new type of DITA topic for creating slide presentations. The following diagram conveys a lot of information:

Slide Example UML Diagram

My conversation with the assembled audience of technical staff and business users would go something like this:

Each green ellipse represents an element already in existence. Each yellow ellipse represents a proposed element. The double angle brackets in the ellipses are used in UML to define stereotypes. A stereotype is the vocabulary extension mechanism built into UML; in our case we are using stereotypes to indicate which organization (DITA or SGMLXML) an element belongs to. The dotted lines represent which elements are included in others and which elements extend a base type (inheritance).

Once the audience agrees that the model is correct, it can be included in the specifications and developers can create the new elements and the properly-formed class attributes required.

In our example the definition of the class attributes in the resulting DTD would look like this:

Class Attributes
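
The original screenshot of those declarations is not reproduced here, but they follow the standard DITA class attribute pattern. In the sketch below the element names and base types are assumptions for illustration; the actual values would match the agreed UML model.

<!-- Sketch only: element names and ancestry are assumed. The pattern is
     what matters: a leading "-" for a structural specialization, a
     module/element pair for each level from base type to specialization,
     and a required trailing space. -->
<!ATTLIST slides class CDATA "- topic/topic slides/slides ">
<!ATTLIST slide  class CDATA "- topic/section slides/slide ">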

Besides communicating DITA specialization information, I also use UML for other aspects of my deployments, such as use case design, system deployment and various activity diagrams. No single model is sufficient to build the system, but by using the same modeling language for all the models, it is easy to impart all of the important analysis, design and implementation decisions that must be made by an organization before deployment.

-Charles Angione