SGMLXML.net
A place for SGML and XML application developers.

June 3, 2016

Shenzhen for viewing Mobile Provision Information

Filed under: iOS,Software — cangione @ 8:32 am

I highly recommend this open source program for Macs to display the embedded .mobileprovision information in IPA files:

https://github.com/nomad/shenzhen/blob/master/README.md

This allows you to look at how an IPA file was signed and when it expires without having to unpack the file.

It's easy to install from the command line:

        sudo gem install shenzhen

Then you can run a command like this:

        ipa info MYAPP.ipa

Example result:

[screenshot of ipa info output]

December 29, 2015

Removing the right hand pane in Adobe Acrobat Reader DC

Filed under: Rants/Musings,Software — cangione @ 9:09 pm

The right hand pane in Adobe Acrobat Reader DC takes up a ton of room and is irrelevant to my life. Here is a hack to stop it from appearing:

  1. Go to the install directory, i.e. "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroApp\ENU".
  2. Create a new subfolder (I used "Disabled").
  3. Move 3 files from the "ENU" folder into the new "Disabled" folder:
    • AppCenter_R.aapp
    • Home.aapp
    • Viewer.aapp
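
The three steps above can be sketched as commands. The sketch below runs against a stand-in ENU directory so it is self-contained; on an actual Windows machine you would run the equivalent mkdir/move commands inside the real install directory, typically from an elevated prompt.

```shell
# Stand-in for the real install directory
# (C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroApp\ENU on Windows).
mkdir -p ENU
touch ENU/AppCenter_R.aapp ENU/Home.aapp ENU/Viewer.aapp

# Step 2: create the new subfolder.
mkdir -p ENU/Disabled

# Step 3: move the three .aapp files so Reader can no longer load them.
mv ENU/AppCenter_R.aapp ENU/Home.aapp ENU/Viewer.aapp ENU/Disabled/
```

Moving the files (rather than deleting them) means the hack is trivially reversible: move them back and the pane returns.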

September 24, 2014

Technique for finding invalid bookmarks in a PDF file

Filed under: Software — cangione @ 3:42 pm

I have been encountering invalid bookmarks on a regular basis within PDF files lately. I went looking for a technique that would allow me to quickly determine if there were any broken bookmarks in a PDF file.

The solution I came up with involves using pdftk [https://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/] and your favorite Mac or Linux command line:

pdftk mypdf.pdf dump_data | grep -B 2 "PageNumber: 0"

Results:

BookmarkTitle: 10.2 Oxygen equipment
BookmarkLevel: 3
BookmarkPageNumber: 0
--
BookmarkTitle: 10.2.1 Chemical oxygen system:
BookmarkLevel: 4
BookmarkPageNumber: 0
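
To go a step beyond eyeballing the grep output, the same dump can be counted and filtered. The example below builds a tiny stand-in dump file (dump.txt is a hypothetical name; in practice you would create it with pdftk mypdf.pdf dump_data > dump.txt) so the pipeline is self-contained:

```shell
# Stand-in for `pdftk mypdf.pdf dump_data > dump.txt`: two bookmarks,
# one of which is broken (pdftk reports unresolved destinations as page 0).
printf '%s\n' \
  "BookmarkTitle: 10.1 Cabin pressurization" \
  "BookmarkLevel: 3" \
  "BookmarkPageNumber: 147" \
  "BookmarkTitle: 10.2 Oxygen equipment" \
  "BookmarkLevel: 3" \
  "BookmarkPageNumber: 0" > dump.txt

# Count the broken bookmarks; anchoring with $ avoids ever matching
# legitimate pages such as 10 or 20.
grep -c "BookmarkPageNumber: 0$" dump.txt

# List just the titles of the broken entries.
grep -B 2 "BookmarkPageNumber: 0$" dump.txt | grep "BookmarkTitle:"
```

Saving the dump once and grepping the file is also faster than re-running pdftk for each query on a large PDF.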

July 27, 2014

Markdown

Filed under: Software — Tags: — cangione @ 9:29 am

I have never been a fan of heavy UIs for writing. I find tools like Word distracting, and yet a simple text editor doesn't have enough power, forcing you to go back and format the content when you're done.

The majority of my content is created initially for:

  • Email
  • White papers
  • Blogs
  • Wikis or online notebooks

If the content created becomes valuable enough, it will ultimately end up as a DITA topic for reuse in some future structured creation.

I have been looking for a markup language that will allow me to quickly support the various uses and evolution of my content without formatting distractions during creation. The Markdown language was created in 2004 with the goal of allowing people to write using an easy-to-read, easy-to-write plain text format, and optionally convert it to structurally valid XHTML.
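
As a taste of the syntax, a few lines of Markdown cover most day-to-day needs; the plain text is perfectly readable as-is, and any Markdown processor will turn it into valid XHTML (the content below is an invented example):

```markdown
# Status Update

This week's draft is *nearly done* and ready for [review](https://example.com).

- Outline complete
- Figures drafted

> One quoted remark from a reviewer.
```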

January 20, 2011

Towards a Better Understanding of Software Customization

Filed under: Software — cangione @ 8:09 pm

Packaged software simplifies upgrades and maintenance because of its singularity. Support and adoption are also simplified because the single system is well known. Customization of software, by definition, is the modification of packaged software to meet individual requirements. Various valid enterprise requirements lead to customization of packaged software. In general, the benefits, as well as the shortcomings, of available customization techniques are poorly understood and improperly lumped together into an all-encompassing perception that all customizations inhibit evolvability.

Maintenance, or evolution, is the longest and most expensive phase of the application lifecycle. Once released, software has to be corrected and updated. Evolvability is the key metric used to judge an application's ability to incorporate updates and new revisions of packaged software while maintaining required customizations throughout the application lifecycle. This post defines three levels of software customization and their impact on application evolvability.

The Three Levels of Software Customization

An application may be defined as the combination of packaged software and customizations required to support an end user in performing user specific tasks efficiently.

Individualization

The first level of customization, known as Individualization, is commonly referred to as "customization through configuration". Important aspects of corporate identity, such as the corporate logo and color scheme, should reflect a corporate design. Packaged software should provide different options to different user groups. Reports should reflect company identity and the information necessary to support an organization's processes, workflow and individuality.

This first level of customization almost always takes place for enterprise applications and tends to be a non-controversial practice. Properly executed, Individualization is well understood and requires a fairly low degree of effort to implement and maintain over the application's evolvable lifespan.

Key metrics for Individualization are the number of properties, options and configuration settings changed from the packaged software installation baseline.

Tailoring

The second level of customization is Tailoring, and it represents a stable middle ground for the application's continued evolvability. Packaged software comes with built-in assumptions and procedures about organizations' business processes. These assumptions and procedures seldom match exactly with the implementing organization's existing processes. Therefore, most implementation projects involve some degree of software tailoring so that the software will fit current organizational processes. Tailoring may involve module selection, table configuration, or the addition of encapsulated new user functions.

In Module Selection, companies choose to implement one or more modules of a software application. In this case, customization is achieved through the company's module selection. A key metric for module selection is the number of implemented modules versus the total number of modules available.

Table Configuration, another tailoring technique, allows an enterprise to eliminate features that are not pertinent to a given task and to tailor necessary features to better suit it, such as by selecting more appropriate application defaults or by using task-specific vocabulary in the application. A key metric for Table Configuration is the number of fields configured in each application table.

Tailoring by using Encapsulated User Functions may be broken into five categories: external input types, external output types, logical internal types, external interface types and external inquiry types.

All of these second level tailoring techniques use software "open points" built into the application's framework. Software open points, popularly known as application programming interfaces (APIs), permit changing a software system by exposing internal components in an explicit and malleable way, allowing new or missing functions to be deployed to users. Properly using these open points to extend or enhance packaged software's built-in behaviors adds the requirement for proper development and test environments, and a higher degree of skill usually provided by the system integrator.

The degree to which second level customizations can easily be maintained over the application's evolvable lifespan largely depends on the resiliency of the exposed APIs. In mature packaged software, APIs tend to be relentlessly maintained, making tailored customizations predictable during future software package/customization deployment cycles. A metric measuring the number and frequency of changes to established APIs will provide insight into the stability and resiliency of the application's open points.

Core Code Changes and New Custom Software Modules

The third level of customization involves Core Code Changes and New Custom Software Module additions to packaged software. This level of customization brings with it the complexities of true software development, integration and testing during each revision/upgrade cycle. Customizations at this level frequently undermine the confidence in the packaged software’s integrity and the overall application’s evolvability.

The number and frequency of these third level changes should be carefully tracked. A high number of these changes indicates that the packaged software may not be suitable for an application. Additionally, adoption effort is an important metric with this level of customization since there is no baseline training or documentation upon which to rely.

A healthy evolvability metric for an application with third level customizations would be a downward trend in the number of third level customizations needed as new packaged software modules become available in future package/customization deployment cycles.

Conclusion

Evolvability should be an intrinsic metric in an application's design, initial implementation and maintenance, since healthy enterprise applications evolve over time. Each customization to packaged software can be categorized and measured. Properly implemented first and second level customizations provide an acceptable level of evolvability for an application. Third level customizations, while initially necessary, typically impact evolvability and should trend downward over the lifespan of a project, while first and second level customizations may trend upward to accommodate the configuration and tailoring of new replacement modules.

Charles Angione

March 18, 2010

Large Volumes of Mission Critical Information

Filed under: Software,XML — cangione @ 6:07 am

Like most people I’m frequently asked what I do for a living. I tell people that I work with companies that have a large quantity of mission critical information that they need to be able to find in an instant and that information better be right every time.

The Economist magazine just ran a really good special report, "The data deluge", on the flood of data that individuals, companies and governments face.

Some things that resonated with me from the report:

In 1971 the economist Herbert Simon wrote:

"What information consumes is rather obvious: it consumes the attention of its recipients."

I like his conclusion:

"Hence a wealth of information creates a poverty of attention."

The term "data exhaust" was used to describe the trail of clicks that Internet users leave behind from a transaction. This exhaust can be mined and made useful: Google refining its search engine to take into account the number of clicks on an item to help determine search relevance is one example of putting data exhaust to work. I really like this "data exhaust" term and believe it fits well with trying to make sense of large data sets. Dense patches of exhaust could indicate that instructions in service documentation are not clear enough or, properly mined, could indicate an impending issue with a particular component in a product.

"Delete", written by Viktor Mayer-Schönberger, argues that systems should "forget" portions of a digital record over time. Systems could be designed so that parts of digital files degrade over time to protect privacy, yet the items that remain could possibly benefit all of humankind. The concept of donating your digital corpse (medical reports, test results, etc.) to science comes to mind as a good example of this concept. While I might not want people to be able to link my name to my medical records, the records themselves, with no name attached, would provide a lifetime of data that could be used to advance many different fields.

Being able to consistently create the right set of rules for the ethical use of various types of data exhaust will be tricky. The article in the Economist mentions six broad principles for an age of big data sets that I liked:

  1. Privacy
  2. Security
  3. Retention
  4. Processing
  5. Ownership
  6. Integrity of Information

C. Angione

August 29, 2009

Full Screen mode in Firefox with the Windows toolbar present

Filed under: Rants/Musings,Software — cangione @ 5:54 am

I’ve always been a big fan of Firefox. The Firefox full screen (F11) mode that covers the Windows toolbar at the bottom of the screen and removes the top window bar is great when reading space is at a premium like on a small laptop. There are times when I want the same look as the Full Screen mode but want the Windows toolbar visible at the bottom so I can quickly switch tasks.

I added a plugin called userChromeJS that allowed me to get rid of the top window bar but leave the Windows toolbar at the bottom. Once you install the plugin, add the following function to the userChrome.js file.

function hideChrome() {
    if (navigator.platform == "Win32") {
        window.moveTo(0, 0);
        window.maximize();
        document.getElementById("main-window").setAttribute('hidechrome', 'true');
        // preserve a small area for the taskbar to appear
        window.resizeTo(screen.availWidth, screen.availHeight - 2);
    } else {
        document.getElementById("main-window").setAttribute('hidechrome', 'true');
        window.moveTo(0, 0);
        window.resizeTo(screen.availWidth, screen.availHeight);
        window.maximize();
    }
}
hideChrome();

userChrome.js is located in your userprofile\AppData\Roaming\Mozilla\Firefox\Profiles\.default\chrome directory.

February 13, 2009

Controlling Multiple Computers Without a Keyboard Switch

Filed under: Linux,Software — cangione @ 3:57 pm

When I'm working in my home office, I typically have my Windows XP laptop on the left hand side of my desk, next to the rest of my monitors. I've always wanted to control the laptop from my main keyboard and mouse without having to hook up a keyboard switch. I find the hardware solution annoying when you are trying to work on both machines essentially at once, or when you just want to pick up your laptop and sit somewhere else (lazy, I know). When I drag my mouse off the left side of my Linux desktop, I want to control my laptop. When I drag my cursor off the right side of my laptop screen, I want to resume working with my Linux desktop. Essentially, I want to seamlessly make my Windows XP desktop part of my Linux desktop.

There is an open source project called Synergy that functions exactly this way over your TCP/IP network. The program redirects the mouse and keyboard as you move the mouse off the edge of a screen. Synergy also merges the clipboards on each system into one, allowing you to copy-and-paste between systems. Handy for copying stuff out of e-mail!
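
A minimal synergy.conf sketch of the layout described above (the screen names desktop and laptop are placeholders; Synergy matches them to each machine's hostname):

```text
section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        left = laptop
    laptop:
        right = desktop
end
```

The links section is what encodes the "drag off the left edge" behavior: each entry says which screen you land on when the cursor leaves a given edge.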

Like all good open source projects, the program works on any combination of Linux, Windows XP and Mac OS X 10.2 and higher.

Enjoy!

Charles Angione

February 10, 2009

Building a Modern 64-bit Operating System

Filed under: Software — cangione @ 6:55 pm

I recently upgraded the memory in one of my home workstations to 4 gigs. I’ve always wondered what it would be like to run a 64-bit OS on this AMD64 system and decided to refresh the entire machine to take full advantage of the memory upgrade. I decided to go with the latest 64-bit Ubuntu Linux distribution codenamed Intrepid Ibex (8.10) as my base OS. As I put this upgraded machine through its paces, I’m a bit amazed at what the machine is now capable of.

Because of the development, conversion and simulation work that I do, I tend to demand a lot from my computers. My workstation runs at least one virtual operating system in VMware Server alongside the work I do on the host OS. There are times, when the virtual OS and the host OS are both really crunching away, that things slow to a crawl… and that's being kind. The computer becomes essentially unusable for day-to-day activities like e-mail and web browsing.

I intentionally tried to crush the machine I upgraded by running VMware server with two Windows virtual machines that had CPU intensive tasks. While that was going on I did normal things like surfing with Firefox, writing an e-mail and remote connecting to other systems. I didn’t see any lag or slowdown from the UI. Everything is very smooth and I’ve become a believer!

I admit that 64-bit operating systems may not be practical for simple desktop use at this point. Not all applications run on 64-bit systems, but you can run 32-bit virtual operating systems within a 64-bit host system.
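
For anyone curious whether their own machine is ready for the same jump, two quick checks (a sketch; the flag name is Linux-specific):

```shell
# Machine hardware name: x86_64 means a 64-bit kernel is already running.
uname -m

# On Linux, the "lm" (long mode) flag in /proc/cpuinfo means the CPU is
# 64-bit capable even if a 32-bit OS is currently installed. The fallback
# echo keeps the pipeline's exit status clean on 32-bit-only hardware.
grep -m 1 -o '\blm\b' /proc/cpuinfo || echo "no long-mode flag (32-bit-only CPU)"
```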

If you think today's computers are fast, wait until they have a 64-bit OS! It isn't about megahertz anymore; it's about doubling the amount of data a CPU can crunch per clock cycle. I've concluded that a 64-bit chip and a 64-bit OS do have the power to dramatically improve the performance of your more demanding applications. It revolutionizes what a single workstation can do.

Happy Crunching!

Charles Angione


September 29, 2008

Using UML to describe DITA Specializations

Filed under: Rants/Musings,Software,XML — cangione @ 11:29 am

As a consultant, I’m often parachuted into complex projects and need to be able to appear intelligent in a short period of time to both technical staff as well as business people. In setting up a publishing system based on the Darwin Information Typing Architecture (DITA), I’m faced with trying to communicate the specialized structures and element names needed by an organization.

One of the most important principles of DITA specialization is the concept of inheritance. Inheritance allows you to use the structures and semantics previously defined by others as a starting point for your specializations. But how do you communicate this idea simply, clearly and concisely, again and again, during the design process? If a picture is worth a thousand words, a UML model is invaluable.

The Unified Modeling Language (UML) is a graphical notation that is particularly good at expressing object-oriented designs. UML went through a standardization process and is now an Object Management Group (OMG) standard.

I like using UML because it allows me to communicate certain concepts more clearly than alternatives like natural language. Natural language is too imprecise and subject to interpretation by the reader. A DITA DTD or Schema is very precise but not something that should be created during the design process. So I use UML to communicate and keep track of the important details like inheritance.

A lot of people talk about the learning curve associated with UML. I'm not advocating that team members be exposed to all that UML can provide, only the parts necessary to convey the important details of the moment.

Let’s say that we want to build a new type of DITA topic for creating slide presentations. The following diagram conveys a lot of information:

Slide Example UML Diagram

My conversation with the assembled audience of technical staff and business users would go something like this:

Each green ellipse represents an element already in existence. Each yellow ellipse represents a proposed element. The double angle brackets in an ellipse are used in UML to define stereotypes. A stereotype is the vocabulary extension mechanism built into UML. In our case we are using stereotypes to indicate which organization (DITA or SGMLXML) an element reports to. The dotted lines represent which elements are included in others and which elements extend a base type (inheritance).

Once the audience agrees that the model is correct, it can be included in the specifications and developers can create the new elements and the properly-formed class attributes required.

In our example the definition of the class attributes in the resulting DTD would look like this:

Class Attributes
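
Since the screenshot may not come through, here is a hedged sketch of the fixed pattern such DTD class attributes follow: a leading "-" for a structural specialization, then each ancestor module/element pair down to the new element, ending with a required trailing space. The element names slides and slide, and the choice of section as the ancestor of slide, are assumptions drawn from the slide-presentation example, not from an actual DTD:

```dtd
<!-- slides specializes the DITA topic element -->
<!ATTLIST slides class CDATA "- topic/topic slides/slides ">

<!-- slide is assumed here to specialize the DITA section element -->
<!ATTLIST slide  class CDATA "- topic/section slides/slide ">
```

These class values are what let DITA processors fall back to the base topic and section behavior for any tool that has never heard of the slides vocabulary.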

Besides communicating DITA specialization information, I also use UML for other aspects of my deployments such as use case design, system deployment and various activity diagrams. No single model is sufficient to build the system but by using the same modeling language for all the models, it is easy to impart all of the important analysis, design and implementation decisions that must be made by an organization before deployment.

-Charles Angione
