One Unified Global Perspective
Communications with a Global Perspective

2009 Dec 22 - Tue

Global Warming: Should I Be Concerned?

For years, I've heard reports that the Earth is warming up and that, as a consequence, something should be done about it. The primary symptom of warming has been reports that Antarctica's ice cap is shrinking. The primary suggested contributing factor has been the combustion of various fossil fuels, which adds carbon dioxide to the atmosphere, which in turn traps reflected sunlight, which in turn warms the Earth and its atmosphere. Among other things.

I've been led to believe that global warming is a bad thing, and should be stopped, or possibly even reversed.

A fellow by the name of Mann recently presented a chart, now known as the 'hockey stick' chart, which shows a significant increase in average global temperatures. Popular media has become enamoured with this chart, particularly recently, what with all the latest brouhaha during the Copenhagen Climate talks.

From what I hear, developing nations want developed nations to give them billions of dollars in either cash or technology to cut emissions. If developed nations can't even commit to cutting their own emissions, what is the point in giving someone else money under the illusion that they might also cut emissions?

Would it not be better to just put the hundreds of billions of dollars directly into alternative energy research? Obviously, there will be expensive gaffes along the way, but at least something direct may come of it to assist the developed as well as developing countries.

But then along comes some data indicating that this may all be a moot point. A number of different scientific observers have put together data from many different sources, with the data indicating that we are by no means currently at a historically high global temperature. The Middle Ages (around 1000 AD) saw higher temperatures than what we are currently experiencing. The time around 0 AD saw even higher temperatures, and 1000 years before that, higher temperatures still. And over the course of history, substantial temperature swings have been noted.

Mann's 'Hockey Stick' may be significant in recent history, but is it significant in the grand scheme of things? Is global warming an event which would have arrived regardless of human meddling?

One way or another, the climate is going to give us some interesting action over the next little while, for some value of little. It looks like I should be concerned, but due to completely different reasons.

For more climate oriented pointers, visit Watts Up With That?.

[/Personal] permanent link


2009 Dec 06 - Sun

ToS/DSCP Cheat Sheet

On the Flow-Tools email list, Craig Weinhold published a cheat sheet for how to treat IP Packet ToS (Type of Service) bits:

**** Pre-1998

The IPv4 ToS byte was part of the original 1981 definition of the Internet Protocol 
in RFC 791, which specified a 3-bit precedence value and 3 bits of ToS attributes. 
In the tables below, "tos" values refer to the entire byte. 
In 1992, RFC 1349 added a fourth ToS attribute.

  0x80  0x40  0x20  0x10  0x08  0x04  0x02  0x01
+-----+-----+-----+-----+-----+-----+-----+-----+
|     PRECEDENCE  |    TOS attributes     |  -  |
+-----+-----+-----+-----+-----+-----+-----+-----+

         PRECEDENCE                     TOS attributes

name            dec tos bin     name              dec tos bin
network           7 224 111     min-delay           8  16 1000
internet          6 192 110     max-throughput      4   8 0100
critical          5 160 101     max-reliability     2   4 0010
flash-override    4 128 100     min-monetary-cost   1   2 0001
flash             3  96 011     normal              0   0 0000
immediate         2  64 010
priority          1  32 001
routine           0   0 000


**** Post-1998 

RFC 2474 reworked the ToS as a 6-bit Differentiated Services Code Point (DSCP) 
and, soon after, RFC 3168 allocated the lowest two bits for 
Explicit Congestion Notification (ECN, 
an IP analogue of frame-relay FECN and ATM EFCI).

  0x80  0x40  0x20  0x10  0x08  0x04  0x02  0x01
+-----+-----+-----+-----+-----+-----+-----+-----+
|                DSCP               |    ECN    |
+-----+-----+-----+-----+-----+-----+-----+-----+

                        DSCP

name    dec tos binary      name    dec  tos binary
AF11    10   40 001010      CS1       8   32 001000
AF12    12   48 001100      CS2      16   64 010000
AF13    14   56 001110      CS3      24   96 011000
AF21    18   72 010010      CS4      32  128 100000
AF22    20   80 010100      CS5      40  160 101000
AF23    22   88 010110      CS6      48  192 110000
AF31    26  104 011010      CS7      56  224 111000 
AF32    28  112 011100      EF       46  184 101110
AF33    30  120 011110      default   0    0 000000
AF41    34  136 100010      
AF42    36  144 100100      AF = assured forwarding
AF43    38  152 100110      EF = expedited forwarding
                            CS = class selector

   ECN (unrelated to QoS)
   00   Not-ECT  Not ECN-Capable Transport
   01   ECT(0)   ECN-Capable Transport 
   10   ECT(1)   ECN-Capable Transport 
   11   CE       Congestion Experienced

**** Notes on interpreting the ToS byte

The two definitions are complementary for the upper 3 bits. This is good, since those three bits are often copied to/from the 3-bit class-of-service (CoS) field of layer-2 802.1p frames and the 3-bit experimental (EXP) field of MPLS frames. Bits 3-5, however, are fairly incompatible.

Thus, it's important not to oversimplify precedence/DSCP as a simple pecking order. In reality, each unique precedence/DSCP value conveys a packet's requirements for throughput, latency, and packet loss, three traits that are somewhat at odds with each other. And any value can be assigned to an organizationally-unique purpose. For example,

  • Packets with precedence 5 and/or DSCP EF are often serviced by priority queues, so they may delay packets with higher precedence/DSCP values.
  • Within each AF level (e.g., AF2x includes AF21, AF22, and AF23), the higher values indicate a higher tolerance to packet loss. I.e., a congested interface should drop AF22 packets earlier than AF21 packets. In Cisco IOS, this behavior is implemented with DiffServ-compliant WRED ('random-detect dscp' on a class-map).
  • Any DSCP value under CS6 can be assigned for any organizationally-unique use. For example, Precedence 1/DSCP CS1 is often assigned for use as a less-than-best-effort class called scavenger. To successfully implement the scavenger class, all network devices must agree to treat CS1 traffic worse than Precedence 0/DSCP default.
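
To make the bit positions concrete, here is a small C++ sketch (my own illustration, with the values taken from the tables above) that pulls the fields apart with shifts and masks:

// Decompose a ToS byte into its pre-1998 and post-1998 fields.
// Positions are from RFC 791 (precedence), RFC 2474 (DSCP), RFC 3168 (ECN).
#include <cstdio>

int main() {
    unsigned tos = 184;               // EF expressed as a full ToS byte

    unsigned precedence = tos >> 5;   // upper 3 bits, same in both eras
    unsigned dscp       = tos >> 2;   // upper 6 bits, post-1998
    unsigned ecn        = tos & 0x03; // lower 2 bits, RFC 3168

    std::printf("tos=%u precedence=%u dscp=%u ecn=%u\n",
                tos, precedence, dscp, ecn);

    // And the reverse: DSCP EF (46) maps back to ToS 184.
    unsigned ef = 46;
    std::printf("dscp %u -> tos %u\n", ef, ef << 2);
    return 0;
}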

ECN (RFC 3168) is an emerging issue for traditional netflow collection and processing. The gist of ECN is that an intermediary router can, after sensing congestion, change the lower two bits of the ToS to indicate congestion so that the hosts can slow themselves down. It's essentially an L3 implementation of frame-relay FECN. Unfortunately, since the ToS field changes packet-to-packet and hop-to-hop, it also disrupts the traditional netflow 7-tuple key (protocol, src/dst IP, src/dst port, ToS, input interface).

If you can, exclude ToS as a flow key on your netflow sources. Recent Cisco IOS versions let you do this with Flexible NetFlow while still exporting netflow v5.

[/Networks] permanent link


2009 Oct 26 - Mon

Machine Readable News and Algorithmic Trading

A-Team Research has released a special report called: Machine Readable News and Algorithmic Trading.

I've written some code to accept a news release feed from DTNIQ/IQFeed. This report comes in handy for supplying some ideas on how to analyze and make use of the news feed. Here are some examples:

  • When generating trading signals for high frequency traders and other alpha-seekers, it can be used to build sentiment measurement applications, stock screening applications and back-testing systems for trading algorithms.
  • It can be used in support of market surveillance systems.
  • This translates into simple stock-screening applications for individual securities or lists of stocks.
  • It can mean the analysis of macroeconomic data to identify trends, correlations and other relationships.
  • It can involve scanning key parameters to measure market sentiment.
  • It could predict potentially volatile trading days, indicating which stocks or types of stock may be most affected.
  • It can also be used to quickly derive directional signals from the marketplace, and set in play appropriate trading algorithms.

[/Trading/AutomatedTrading] permanent link


Bottom Line on Security in Windows 7, and Some Thoughts on MultiTouch

From SANS NewsBites vol. 11 Num. 84, 2009-10-23, NewsBites editorial board member John Pescatore says:

From a security perspective, Windows 7 offers definite improvements over Windows XP, but there is no major security reason to move to Windows 7 before it makes business sense. The biggest improvement in Windows desktop security comes from getting off of the IE6 browser and moving to IE8 or the latest version of Firefox - and you don't need Windows 7 to do that.

I've read that Windows 7 is somewhat faster and better than Windows Vista. I haven't seen definitive reviews showing that Windows 7 is faster than Windows XP, or that it offers anything useful over and above what Windows XP offers as a development or user platform.

Well actually, I understand that Windows 7 has a multi-touch API built-in for when multi-touch devices become more ubiquitous. 10/GUI is one such interesting multi-touch method of CHI (Computer Human Interaction).

reacTIVision is an existing tangible multi-touch interaction framework. I've always thought that using a multi-touch interface with a DMX controlled lighting system would make for some very interesting busking capabilities for live concerts.

Anyway back to Windows 7, the EE Times Newsletter roving editor Rick Merritt asserts:

That all Microsoft has done with Windows 7 is not mess it up. "Imagine the response systems makers might have if Microsoft had actually enabled some cool new ideas," Merritt writes. "Call me a curmudgeon, but I think Microsoft is resting on its monopolistic backside." What was needed from Microsoft, of course, was an OS that advanced the state of the art. This is not the time for tech companies to play it safe, especially a company with pockets as deep as Microsoft's.

On the other hand, if I took the time to evaluate real-life workflows in the new Windows 7 environment, and the execution time differentials were minimal, I'd migrate just to stay with the latest thing. Some of the workflows I'd have to check would be:

  • Editing video with Adobe Premiere Pro CS4: lots of drive activity and lots of multimedia interaction
  • Compiling heavily templated Boost supported C++ programs in Visual Studio: lots of CPU and some drive activity
  • Compiling heavily templated Boost supported C++ programs in a KDE/Eclipse/GCC environment hosted in a VMWare Workstation environment: lots of cross operating system calls
  • Running trading and news gathering applications with intensive cross thread messaging: cpu and network intensive

Can anyone offer up opinions on what they've encountered between Windows XP, Windows Vista, and Windows 7 in these various workflow environments from a speed/stability/effectiveness point of view?

Shortly after having written this, I saw a rather lengthy review of XP, Vista, and Windows 7 published at Ars Technica, entitled Hasta la Vista, baby: Ars reviews Windows 7. Buried further back, the article makes reference to the fact that performance isn't much different among the three. The article does mention multi-touch, and indicates that it isn't very well integrated into the supplied applications.

Once I get some time, it looks like an upgrade to Windows 7 might be worth examining.

[/Personal/Technology] permanent link


2009 Oct 17 - Sat

Trader Urgency Indicator

In the LinkedIn Group Automated Trading Strategies, Alpesh Patel posted a Trader Urgency Indicator (sketched in code after the list):

  • Fix a number of ticks based on the market traded (say, a 100-tick chart for the S&P 500 E-mini)
  • Monitor the time taken to finish the bar
  • You will notice that most successful breakout bars finish in significantly less time, showing trader urgency
  • At trend exhaustion, you will notice significantly more time taken to finish the bar
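
Here is a rough, untested sketch of the idea in C++; the 100-tick bar size comes from the post, while the tick-callback wiring is an assumption for illustration:

#include <ctime>
#include <iostream>

const int kTicksPerBar = 100;      // e.g. a 100-tick chart for the ES E-mini

int ticks_in_bar = 0;
std::time_t bar_start = std::time(0);

// Called once per trade tick; reports how long each bar took to fill.
void on_tick() {
    if (++ticks_in_bar == kTicksPerBar) {
        std::time_t now = std::time(0);
        std::cout << "bar completed in " << (now - bar_start)
                  << "s\n";          // short bars suggest urgency
        ticks_in_bar = 0;
        bar_start = now;
    }
}

int main() {
    for (int i = 0; i < 250; ++i)  // stand-in for a real tick feed
        on_tick();
    return 0;
}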

I think I wrote about Range Bars at one point in time. This is probably a variation on that theme.

[/Trading/TechnicalAnalysis] permanent link


Memory Leak Detection in MSVC 2008 C++

In earlier versions of Visual Studio, when building debug releases, I seem to recall that memory leak detection was automatically enabled. In Visual Studio 2008, memory leak detection is not automatically enabled; code needs to be added to the source files to make it available.

Memory Leak Detection Enabling is an MSDN document describing how to enable the facility. Basically, to enable the debug heap functions, include the following statements:

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

Immediately before the program exits, include the following function call:

_CrtDumpMemoryLeaks();

One commenter indicates that this works best with C code, i.e., C code will get descriptive reports, but C++ code will get cryptic memory reports.

In order to get the file and line number reporting to work with C++, you need to manually redefine new in your code. This is done by undefining new, and redefining it to point to the debug version that takes a file and line number.
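
A sketch of that redefinition, along the lines of the MSDN document (placed after the includes above, in debug builds only):

// Map 'new' to the debug operator new, which records file and line.
#ifdef _DEBUG
#define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
#define new DBG_NEW
#endif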

During debugging of Windows-based applications, std::cout no longer sends text to the IDE's Output window. Instead, the function OutputDebugString( "..." ) needs to be used.

[/Personal/SoftwareDevelopment/CPP] permanent link


2009 Oct 14 - Wed

Boost BJam Updated

With version 1.40 of Boost, library names are decorated differently. To keep the old style library decorations and naming style, the option "--layout=versioned" should work. So from my 2008/10/10 Boost Build Article, my typical command line should be:

bjam --layout=versioned --toolset=msvc-9.0 variant=debug threading=multi link=static runtime-link=static stage

[/OpenSource/Programming] permanent link


2009 Oct 11 - Sun

Traceroute Methods

Traceroute, in a nutshell, works by iteratively sending packets into the network with specific TTL (Time To Live) settings. The first round of packets uses a TTL of 1, the second a TTL of 2, and the value is incremented each iteration until the destination responds or the maximum number of hops has been evaluated.

The traditional form of probing is to send out an ICMP type 8 (echo request) packet. There are other forms (a minimal code sketch follows the list):

  • Windows 'tracert' uses ICMP type 8 with incrementing TTL
  • Unix 'traceroute' uses UDP packets on destination ports starting at 33434, through (33434 + number of hops - 1)
  • TcpTraceRoute, which uses TCP SYN packets to penetrate firewalls and NAT systems
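
For the curious, here is a minimal, untested C++ sketch of the UDP method on POSIX systems. The target address is a placeholder, the raw ICMP socket requires root, and error handling and multiple probes per hop are omitted:

// Minimal UDP-probe traceroute sketch.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <iostream>

int main() {
    const char* dest = "192.0.2.1";   // placeholder target address
    const int base_port = 33434;      // classic traceroute port
    const int max_hops = 30;

    int udp  = socket(AF_INET, SOCK_DGRAM, 0);
    int icmp = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);  // needs root

    timeval tv = {3, 0};              // 3 second reply timeout
    setsockopt(icmp, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    for (int ttl = 1; ttl <= max_hops; ++ttl) {
        setsockopt(udp, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));

        sockaddr_in to = {};
        to.sin_family = AF_INET;
        to.sin_port = htons(base_port + ttl - 1);
        inet_pton(AF_INET, dest, &to.sin_addr);
        sendto(udp, "probe", 5, 0, (sockaddr*)&to, sizeof(to));

        char buf[512];
        sockaddr_in from = {};
        socklen_t len = sizeof(from);
        if (recvfrom(icmp, buf, sizeof(buf), 0, (sockaddr*)&from, &len) < 0) {
            std::cout << ttl << "  *\n";   // timeout, no reply
            continue;
        }
        char addr[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &from.sin_addr, addr, sizeof(addr));
        std::cout << ttl << "  " << addr << "\n";

        // ICMP type 11 (time exceeded) comes from intermediate routers;
        // type 3 (port unreachable) from the target ends the trace.
        int iphdr = (buf[0] & 0x0f) * 4;   // IP header length in bytes
        if (buf[iphdr] == 3) break;
    }
    close(udp);
    close(icmp);
    return 0;
}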

[/Networks] permanent link


2009 Oct 01 - Thu

Determining your Dominant Eye

When I was shooting video a few days ago, a couple of questions were going through my head. One was whether one should keep both eyes open when viewing through the viewfinder. If one is to keep both eyes open, the second question arose: which eye to use? Is there a difference? Ok, three questions.

A brief search indicates that, in the world of archery at least, there is indeed a dominant eye. There is even a page for determining your dominant eye:

  • Extend both hands forward of your body and place the hands together making a small triangle (approximately 1/2 to 3/4 inch per side) between your thumbs and the first knuckle.
  • With both eyes open, look through the triangle and center something such as a doorknob or the bullseye of a target in the triangle.
  • Close your left eye. If the object remains in view, you are right eye dominant. If your hands appear to move off the object and move to the left, then you are left eye dominant.
  • To validate the first test, look through the triangle and center the object again with both eyes open.
  • Close your right eye. If the object remains in view, you are left eye dominant. If your hands appear to move off the object and move to the right, then you are right eye dominant.
  • One more alternative method is to assume the same position with your hands forming the triangle around the object and have both eyes open. Now, slowly bring your hands toward your face while continuing to look at the object with both eyes open. When your hands touch your face, the triangle opening should be in front of your dominant eye.

I'm guessing that one uses their dominant eye when using the viewfinder. I haven't answered the question of whether having both eyes open is good or not. Having at least one open seems like a good idea, though.

[/Personal] permanent link


Building Boost 1.40.0 on Debian Linux

Boost builds well on Linux. To get a clean build, I needed two libraries. With Python already installed, I needed to 'apt-get install python-dev'. The iostreams library needed the bzip2 libraries which can be installed through 'apt-get install libbz2-dev'.

After downloading bjam from sourceforge, my build then used:

bjam install --toolset=gcc --prefix=/usr/local --layout=tagged variant=debug threading=multi link=static

Instead of 'debug', 'release' can be used.

[/Personal/SoftwareDevelopment/CPP] permanent link


2009 Sep 29 - Tue

VMWare Unity

The latest release of VMWare Workstation has a new feature called Unity Mode. It is usable with Linux and with Windows 2000 and later guest operating systems. Unity Mode is entered by clicking a button in VMWare to "display applications directly on the host desktop".

The help file goes on to say:

The virtual machine console view is hidden, and you can minimize the Workstation window.

You can use keyboard shortcuts to copy, cut, and paste text between applications on your host machine and virtual machine applications displayed in Unity mode. You can also drag and drop and copy and paste files between host and guest.

The Ctrl+Shift+V key combination will pop up the virtual machine's Start or Applications Menu.

[/Personal/Technology] permanent link


Upgrade to KDE4: Black Screen, Obsidian Cursor

Today when upgrading my Debian Lenny/KDE to the latest version, I started having problems with KDE.

On my first upgrade, I did a simple 'apt-get update', 'apt-get upgrade' sequence. A bunch of packages were held back. The end result was that I could log in to KDE and see a desktop, but I had no menu interface.

Considering that a bunch of packages were being held back, I did an 'apt-get update', 'apt-get dist-upgrade' sequence. Upon logging into the KDE shell, all I saw was a black screen and a shiny obsidian cursor.

It looks like the transition from KDE 3.5 to KDE 4.0 is not seamless in this Debian (Lenny) point release. However, that isn't quite correct: in my /etc/apt/sources.list file I do have entries for testing and experimental. So..., I may now be downloading testing or experimental releases.

In any case, the resolution to the problem appears to be to drop into the console and run one of these three commands: 'apt-get install kde-standard', 'apt-get install kde-minimal', or 'apt-get install kde-full'.

[/OpenSource/Debian] permanent link


2009 Sep 22 - Tue

Updating WebGUI

WebGUI's Update Page has links to the various updates.

Upgrade information can be found at Upgrading WebGUI.

To view the current upgrade history:

cd /data/WebGUI/sbin
perl upgrade.pl --history --doit
perl testEnvironment.pl

Stop Spectre:

cd /data/WebGUI/sbin
perl spectre.pl --shutdown

Make a backup of the files in /data/WebGUI/etc. The originals will be over-written, but the customized ones should be ok after the upgrade.

Decompress the new archive over the old files (with the current version as of this writing):

cd /data
wget http://update.webgui.org/7.x.x/webgui-7.7.20-stable.tar.gz
tar -zxvf  webgui-7.7.20-stable.tar.gz

Read the WebGUI/docs/gotcha.txt file.

Read the WebGUI/docs/changelog/7.x.x.txt to check out the latest changes.

Restart apache with '/etc/init.d/apache2 restart'.

Run the upgrade:

cd /data/WebGUI/sbin
perl upgrade.pl
perl upgrade.pl --doit --backupDir /data/bu/wg

Run testEnvironment.pl:

cd /data/WebGUI/sbin
perl testEnvironment.pl

Start Spectre:

cd /data/WebGUI/sbin
perl spectre.pl --daemon

Restart apache with '/etc/init.d/apache2 restart'.

[/OpenSource/Debian] permanent link


2009 Sep 15 - Tue

Search Engine Optimization

In my Sept 10 article regarding my experiences with the Elation DS 575E fixture, I used the phrase 'Elation Lighting Design Spot 575E' in many different places.

It took a day or two, but if you search for that phrase, you'll find this site in the second result position, behind only the Elation Lighting web site itself.

Search engine optimization is what they say it is. Embedding a series of keywords in an article multiple times does indeed help boost an article's ranking in Google. The problem is... finding the correct search terms.

The whole phrase got me that second spot, but if the words are rearranged or some are dropped, the ranking falls dramatically. If I had used various combinations, the ranking would probably come out quite different... maybe lower, but high enough across a larger number of search combinations.

[/Personal/Business] permanent link


Elation DS 575E Quality, Follow Up

The morning after I wrote my September 10 article regarding the Elation DS 575E Fixture, I heard back from the Service Department of Elation. Maybe it was coincidental, but I'd like to think that the blog article, in addition to messages I posted on Elation Lighting Forums, ControlBooth, and The Light Network, helped get the ball rolling for servicing my Elation DS 575E fixture.

I'll be getting the parts I need to bring the Elation DS 575E back into spec, and Elation will be providing phone support to get me through the tough bits.

In my article, I did rant a bit about the Elation DS 575E service manual. I had been expecting something with maintenance and service descriptions in it. In the end, with the break-out drawings and the parts lists, there is enough information to identify replacement parts for the Elation DS 575E. The instructions for replacement will be via phone. Replacing a tilt belt is going to be fun.

As I previously mentioned, I think the Elation DS 575E fixtures are great. Getting the service arrangements straightened out was the tough part, mostly due to the fact that I'm two and a half hours and two customs departments away from the nearest service center. I hadn't expected to have to service the Elation DS 575E lights so soon.

[/Personal/Lighting] permanent link


2009 Sep 10 - Thu

Elation Lighting Design Spot 575E Quality, or Lack Thereof

A few months ago, I purchased a couple of Design Spot 575E Moving Lights from Elation Lighting. After a bunch of research into feature sets and prices, these seemed to offer the best bang for the buck. I purchased them through Bill Cronheim at Entertainment Systems. Bill had said they were reliable, worry-free, and worked well.

When I received them, they appeared to work well. After I used them for a few hours, and got used to their capabilities, I realized that the two lights didn't match each other. One Design Spot 575E was having more problems than the other Design Spot 575E.

The first problem I noticed was that the focus motor wouldn't focus. After obtaining a copy of the service manual (what a joke that is) and some poking around, I re-adjusted the focus sensor, which is basically a magnetic sensor on a small circuit board. The sensor board doesn't appear to be long enough. Looking at the parts list, there is a B version of it, so perhaps there is a known problem with it.

When using the Elation Lighting Design Spot 575E lights together during a focus session, I noticed the colour saturation on one was not as good as on the other Design Spot 575E. Wouldn't you know it, the problem Design Spot 575E was the same one that had the sensor board problem.

I let Elation Lighting know about the problem.

I let it go for the time being and continued on with the show preparation. During a lamp check prior to a rehearsal, I noticed that the lamp would only point up at the ceiling. The belt appeared fine but loose. I tried to tighten the belt, but there was no more ability to take up slack. Upon further observation, I found that the belt was splitting. As it happens, the broken belt was on that same Elation Lighting Design Spot 575E. Can you say LEMON?

I had the two Elation Lighting Design Spot 575E moving lights shipped out here to Bermuda. Do you know the customs and shipping expenses I went through?

I offered Elation Lighting a token fee to pay for the LEMON Elation Lighting Design Spot 575E, made a request for replacement belts, a CMY module, and a magnetic sensor board, requested a replacement NEW unit, and said I would buy another one in addition, because I think they have some value.

It has now been several weeks, with very little productive response from Gines Gines (Service Manager) and Eric Loader (Sales Manager) at Elation Lighting. I have tried to be friendly and open with them, but they don't seem to want to offer up any solutions.

I guess if I don't buy tens or hundreds of lights from them, it doesn't matter much that I'm not happy with their after-sales service or support. They can always try to sell lights to someone else.

Don't get me wrong, I love the fixture, but if they can't support it, well, I am no longer a supporter of Elation Lighting and their Elation Lighting Design Spot 575E Moving Head Fixture. I think what I received was not a new unit, as I expected and had ordered, but a B-stock demo unit. If they would just own up to that fact and get me my new replacement fixture, I'd be happy as a clam in wet sand.

[/Personal/Lighting] permanent link


2009 Sep 07 - Mon

Converting MIBS to OIDS

From a Cisco perspective, on the Cisco-NSP mailing list, Lee provided a simple method to convert between a MIB and an OID.

First obtain the oid files from Cisco's web site: ftp://ftp.cisco.com/pub/mibs/oid/oid.tar.gz. Expand the file and extract the included files. Then:

cat * | sort -k 2,2 -k 1 | uniq | nawk '{printf("%-50s  %s\n", $1,$2) }' > ../oids_all.txt

If you want to use it on Windows, use

unix2dos ../oids_all.txt

snmptranslate from net-snmp.sourceforge.net/ does a similar job.

One can also browse them at Cisco's OID Browser.

[/Networks] permanent link


2009 Aug 31 - Mon

Massaging CommunigatePro MIB For Cricket

CommuniGate Pro's web interface has a page which shows SNMP originated statistics. On that same page, there is link for downloading the MIB file which defines the values shown on that page.

Rather than going through all 100 or so MIB entries, I wrote an AWK script to process the CommuniGate Pro MIB file into a Defaults file which can be read by Cricket, the SNMP collector/grapher.

After running the Defaults file with a CommuniGate Pro server for a while, I found that some of the groupings didn't work very well by default. Several values are several orders of magnitude different from other values in the same group. I did some manual editing to get values of like magnitude into their own groups. Here is the resulting Defaults.communigate file. I've left colouring to the Cricket defaults, but at least it gets the values into my monitoring solution.

[/OpenSource/Tools] permanent link


2009 Aug 29 - Sat

Tools for Testing Your Internet Connection

Measurement Lab has a series of tools for Testing Your Internet Connection:

  • Network Diagnostic Tool: Test your connection speed and receive sophisticated diagnosis of problems limiting speed
  • Glasnost: Test whether BitTorrent is being blocked or throttled
  • Network Path and Application Diagnosis: Diagnose common problems that impact last-mile broadband networks
  • Pathload2: Test your available bandwidth
  • Diffprobe: Determine whether an ISP is giving some traffic a lower priority than other traffic
  • NANO: Determine whether an ISP is degrading the performance of a certain subset of users, applications, or destinations

[/Networks] permanent link


Virtually Wandering the Earth

I saw a video once about how collections of pictures can be data mined to produce composite interactions which are more than the sum of the parts. For example, it is said that there are more than 80,000 images of the Notre Dame Cathedral in the Flickr database. By using Scene Reconstruction and Visualization from Community Photo Collections, one can see more detail than any one of the photographers who took pictures while actually on site.

That research is only a minor portion of what can be found at Microsoft Research.

[/Personal/Technology] permanent link


VOIP Security Tools

The Voice over IP Security Alliance has an interesting VoIP Security Tool List which includes things like:

  • VoIP Sniffing Tools
  • VoIP Scanning and Enumeration Tools
  • VoIP Packet Creation and Flooding Tools
  • VoIP Fuzzing Tools
  • VoIP Signaling Manipulation Tools
  • VoIP Media Manipulation Tools
  • Miscellaneous Tools
  • Tool Tutorials and Presentations

Use at your own risk.

[/Networks] permanent link


Options as Indicators

Optionetics has an interesting article called Using Options to Predict Stock Prices. The author, John Jeffery, writes that, in addition to the usual fundamental analysis and technical analysis methods of 'stock direction prediction', options can help indicate trade direction. Three useful indicators include:

Put/Call Ratios: The most popular Put/Call Ratio is the one used for monitoring the sum total of option trades at the CBOE (Chicago Board Options Exchange). The same technique can be used for individual stocks as well. It is basically the ratio of the number of open put positions relative to the number of open call positions on a given stock at a given expiry. With experience, it can be used as a bullish/bearish indicator of the underlying stock. Street Authority has some further information on the Put/Call Ratio. Schaeffer's Investment Research indicates the general market is strongly bullish. They have a series of stock screeners.

    Bullish Stock Screeners
  • Stocks with a high put/call ratio
  • Stocks with high short interest
    Bearish Stock Screeners
  • Stocks with a low put/call ratio
  • Stocks with low short interest

A reference is made to an article by Pan and Poteshman called The Information in Option Volume for Future Stock Prices, where they say that they "performed daily cross sectional analysis on 10 years of CBOE data to reveal that doing nothing more than buying stocks with low put/call ratios and selling stocks with high put/call ratios generated a return of 1% per week." You have to read the full abstract for some caveats though:

We present strong evidence that option trading volume contains information about future stock price movements. Taking advantage of a unique dataset from the Chicago Board Options Exchange, we construct put-call ratios from option volume initiated by buyers to open new positions. On a risk-adjusted basis, stocks with low put-call ratios outperform stocks with high put-call ratios by more than 40 basis points on the next day and more than 1% over the next week. Partitioning our option signals into components that are publicly and non-publicly observable, we find that the economic source of this predictability is non-public information possessed by option traders rather than market inefficiency. We also find greater predictability from option signals for stocks with higher concentrations of informed traders and from option contracts with greater leverage.

One of the authors has another paper entitled Investor Behavior in the Option Market. One of the interesting points from the abstract is the remark "none of the investor groups significantly increased their purchases of puts during the bubble period in order to overcome short sales constraints in the stock market." Taken the other way around, puts are an easy method of getting around short selling restrictions on equities.

Visiting a related author, there is a recent paper called Dynamic Trading with Predictable Returns and Transaction Costs, which discusses portfolio optimization with a mixture of short, medium, and long term mean-reversion based trades. This has nothing to do with options, but is an interesting article in itself which I wanted to keep.

Implied Volatility: Implied Volatility is the expected volatility of an option's underlying asset up to the option expiry. This tool can be used for inter-day and intra-day trading calculations. In the article's example, where the underlying is moving sideways, an increasing Implied Volatility could indicate an impending major move in the underlying. Various other relationships can be established as well.

Option Volumes: When looking at option volumes across the whole series of an underlying's strike prices and expiries, look for unusual activity in volume. This means comparing current traded volume with average daily traded volume. It may be possible to see where traders are seeing resistance or support levels, with the interpretation depending on whether the volume is in puts or calls. If there is no related news, earnings event, or government announcement, then someone may know something.
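
As a toy illustration of the arithmetic behind the put/call ratio and the unusual-volume screen described above (all numbers are invented for the example):

#include <iostream>

int main() {
    // Put/Call Ratio: open put positions relative to open call positions.
    double open_puts  = 15000.0;
    double open_calls = 30000.0;
    std::cout << "put/call ratio: " << open_puts / open_calls << "\n";  // 0.5

    // Unusual option volume: compare today's volume at a strike/expiry
    // against its average daily volume.
    double todays_volume        = 9200.0;
    double average_daily_volume = 1800.0;
    if (todays_volume > 3.0 * average_daily_volume)  // arbitrary threshold
        std::cout << "unusual activity at this strike/expiry\n";
    return 0;
}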

[/Trading/Options] permanent link


2009 Aug 19 - Wed

Debian Dpkg Install

From the Debian Security Announce List, a little short-cut for installing .deb packages:

wget url
        will fetch the file for you
dpkg -i file.deb
        will install the referenced file.

[/OpenSource/Debian] permanent link


IPTables Mangle DSCP

From the Nanog mailing list, a way to force QoS packet marking on outbound packets might look like:

# iptables -t mangle -I OUTPUT -p tcp --sport 80  -j DSCP --set-dscp 0x1a

[/OpenSource/Linux] permanent link


Boost Bind/Lambda replaced by Boost Spirit/Phoenix

Regular users of the C++ library known as Boost will already know about functors, lambda functions, and the like. These abilities mostly originate in the Boost.Bind and Boost.Lambda libraries.

As I'll soon be using the functor capability within my C++ programs, I wanted to make a 'note-to-self' regarding the fact that Boost.Bind and Boost.Lambda have basically been superseded by Boost.Spirit.Phoenix.

The Phoenix library has been accepted into Boost based upon Hartmut's summary of the Phoenix review.

The current incarnation of the Boost libraries is 1.39. Here is the Spirit User's Guide which includes a link to the Phoenix Documentation and a link to the Phoenix Users Guide.

It is noted that FC++ influenced Phoenix, and when looking at the FC++ web site, there is a reference to LC++, a Logic Programming language built atop FC++. I wonder if something similar has been done atop Phoenix.

An example from the Boost Mailing List of using Phoenix:

#include <vector>
#include <algorithm>
#include <boost/shared_ptr.hpp>
#include <boost/spirit/home/phoenix/core.hpp>
#include <boost/spirit/home/phoenix/operator.hpp>
#include <boost/spirit/home/phoenix/bind/bind_function.hpp>

struct A {};

void foo( const A& ) {}

int main()
{
   using namespace boost::phoenix;
   using namespace boost::phoenix::arg_names;

   std::vector< boost::shared_ptr< A > > vec;
   vec.push_back( boost::shared_ptr< A >( new A ) );

   // arg1 is the shared_ptr element; *arg1 dereferences it lazily,
   // so foo receives a const A& for each element.
   std::for_each( vec.begin(), vec.end(), bind( &foo, *arg1 ) );
   return 0;
}

Colorized with CodeColorizer.

[/Personal/SoftwareDevelopment/CPP] permanent link


Simple Desktop Network Monitoring Tools

Here are a few simple tools useful on a Windows' desktop for monitoring basic network stuff:

  • TCPTraceRoute: By sending out TCP SYN packets instead of UDP or ICMP ECHO packets, tcptraceroute is able to bypass the most common firewall filters.
  • LFT: Layer Four Traceroute, which is mostly a non-Windows tool, but partially works in Cygwin.
  • Ping Plotter: helps you pinpoint where the problems are in an intuitive graphical way, and to continue monitoring your connection long-term to further identify issues.
  • SNMP Traffic Grapher: monitor a couple of SNMP values in near real time.
  • WinMTR: WinMTR is a windows clone of popular Matt's traceroute/ping program called MTR.

[/Networks] permanent link


2009 Aug 18 - Tue

Model View Controller

In the world of software development, programmers make use of 'patterns' as a form of programming template.

By defining and segregating certain forms of functionality, code reuse and modularity can be enhanced. One regularly recurring form of functionality involves some form of data collection (a model), some form of view of the data (a view), and some form of interaction with the view and data (a controller). This collection has been defined as the Model-View-Controller (MVC) pattern.

The best diagram I've found which depicts the relationships between these three sets of functions can be found in Designing Enterprise Applications with the J2EE Platform, Second Edition.
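
To make the division of labour concrete, here is a minimal C++ sketch of the pattern; the class names and the string-based model are invented for illustration:

#include <iostream>
#include <string>
#include <vector>

class View;  // forward declaration

// The model holds the data and notifies attached views of changes.
class Model {
public:
    void attach(View* v) { views_.push_back(v); }
    void set_text(const std::string& s) { text_ = s; notify(); }
    const std::string& text() const { return text_; }
private:
    void notify();  // defined after View below
    std::string text_;
    std::vector<View*> views_;
};

// The view renders the model's data.
class View {
public:
    explicit View(Model& m) : model_(m) { model_.attach(this); }
    void render() const { std::cout << "view shows: " << model_.text() << "\n"; }
private:
    Model& model_;
};

void Model::notify() {
    for (std::size_t i = 0; i < views_.size(); ++i)
        views_[i]->render();
}

// The controller translates user input into model updates.
class Controller {
public:
    explicit Controller(Model& m) : model_(m) {}
    void user_typed(const std::string& s) { model_.set_text(s); }
private:
    Model& model_;
};

int main() {
    Model model;
    View view(model);
    Controller controller(model);
    controller.user_typed("hello, MVC");  // the view re-renders via notify()
    return 0;
}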

[/Personal/SoftwareDevelopment] permanent link


Reality Sets In

The blog Bits or Pieces? has an interesting reference to the three stages of expertise:

It starts with someone realizing that they want to learn about something, and that they don't know much about it. They may turn into a wanna-be know-it-all, thinking they know it all. But for the truly intelligent, reality sets in, and the realization is made that one knows a lot about the subject matter, but there is so much more to learn. For me, the hard part is that the more I know, the more I know I don't know. The curve seems to indicate that it only gets worse, not better.

[/Personal] permanent link


2009 Aug 02 - Sun

Building HDF5 in Microsoft Visual Studio 2008

A while ago, I wrote an article about the HDF Group's Hierarchical Data Format (HDF5) Library. In the article, there were some brief installation instructions. This article adds some refinements to those instructions.

You need to start by downloading the two compression libraries, zlib and szip.

Create a sub-directory called 'compress' somewhere. In that sub-directory, create two additional sub-directories: 'include' and 'lib'.

Unzip the two downloads. From each of the two uncompressed libraries, put all the .lib and .dll files into the sub-directory .\compress\lib, and put all the .h files into the .\compress\include sub-directory.

In Windows, create two environment variables by going: Start->ControlPanel->System->Advanced->EnvironmentVariables, and then create two new user variables:

  • HDF5_EXT_SZIP = szlibdll.lib
  • HDF5_EXT_ZLIB = zlib1.lib

The remaining build instructions focus on building the useful HDF5 C++ libraries for HDF5 v1.9, with v1.9.43 being the latest as of this writing. Download /hdf5-1.9.43.tar.gz and expand it with 7-Zip into a working sub-directory called hdf5-1.9.43. Run .\hdf5-1.9.43\windows\copy_hdf.bat. Double-click .\hdf5-1.9.43\windows\proj\all\all.sln to open the Visual Studio solution. The file is in Visual Studio 2005 format; VS 2008 will ask to convert it for you. You'll need to do so.

After the conversion, go into Tools->Options->ProjectsAndSolutions->VC++Directories. Set 'Include Files' to the full path of your .\compress\include sub-directory, and set 'Library Files' to the full path of your .\compress\lib sub-directory.

For the project properties, choose whether you are doing a debug build or a release build. Do the build.

For v1.9.43, I found that there was one debug executable that wouldn't build, which, since I'm only interested in some key libraries, has no effect on my required outcome.

After the build process is complete, open a command prompt in .\hdf5-1.9.43, and run 'installhdf5lib.bat'. The various .dll, .lib, and .h files will be in \dll, \lib, and \include off of .\hdf5-1.9.43\hdf5lib.
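
Assuming the hdf5lib include and library directories above are on the compiler and linker search paths, a minimal smoke test of the freshly built C++ library might look like:

// Creates an empty HDF5 file; links against the HDF5 C++ and C libraries.
#include "H5Cpp.h"

int main() {
    H5::H5File file( "smoke_test.h5", H5F_ACC_TRUNC );  // create/truncate
    return 0;  // the file is closed by the destructor
}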

From Sysinternals, download junction.exe, which allows you to create symbolic links between directories. Put the program somewhere in your path. Then use it to create a symbolic link from your existing project to the hdf5lib directory. This will allow you to change library versions with a simple symbolic link change. For example, something like the following will set a link to the include files wherever you installed and built the hdf5 libraries.

  • junction hdf5 .\hdf5-1.9.43\hdf5lib

[/Personal/SoftwareDevelopment/CPP] permanent link


2009 Jul 31 - Fri

Installing OpenLDAP on Debian Lenny

Here are a few basic apt-get commands for the OpenLDAP installation. I have to look into how TLS is actually implemented and configured.

apt-get install libsasl2-2 libgnutls26
apt-get install ldap-utils libsasl2-modules-ldap 
apt-get install  slapd libldap-2.4-2

[/OpenSource/Debian] permanent link


Installing Asterisk 1.6.2.0 beta3 on Debian Lenny 5.0.2

Debian package manager has the Asterisk v1.4 flavour as a package, but I wanted the latest to try out. Here is the work flow to get the basics in place:

Here are some prerequisites to install. I haven't figured out the 'lua' bit yet:

apt-get install build-essential
apt-get install openssl
apt-get install libssl-dev
apt-get install libldap2-dev
apt-get install libncurses5-dev
apt-get install festival-dev festival
apt-get install curl libcurl4-openssl-dev
apt-get install lua5.1
apt-get install uw-mailutils
apt-get install libgsm1
apt-get install libiksemel3
apt-get install libogg0
apt-get install libspeex1 libspeexdsp1
apt-get install libtonezone1
apt-get install libvorbis0a libvorbisenc2
apt-get install doxygen
apt-get install postgresql-server-dev-8.3 postgresql-client-8.3
apt-get install libnewt-dev
apt-get install linux-headers-2.6.26-2-686
apt-get install libogg-dev
apt-get install libvorbis-dev
apt-get install liblua5.1-posix-dev
apt-get install libgsm1-dev

The basic hardware layer for the kernel is next. This includes dummy timers for systems without additional telephony hardware.

cd /usr/src
wget http://downloads.asterisk.org/pub/telephony/dahdi-linux/dahdi-linux-2.2.0.2.tar.gz
tar -zxvf dahdi-linux-2.2.0.2.tar.gz
cd dahdi-linux-2.2.0.2
make 
make install

User space Dahdi tools are then built:

cd /usr/src
wget http://downloads.asterisk.org/pub/telephony/dahdi-tools/dahdi-tools-2.2.0.tar.gz
tar -zxvf dahdi-tools-2.2.0.tar.gz
cd dahdi-tools-2.2.0
./configure  \
   --sysconfdir=/etc/ \
    --libdir=/usr/lib \
   --localstatedir=/var/local \
   --datarootdir=/usr/share \
   --includedir=/usr/include 
make menuselect
make
make install
make config

This portion installs a recent beta release of the Asterisk engine:

cd /usr/src
wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-1.6.2.0-beta3.tar.gz
tar -zxvf asterisk-1.6.2.0-beta3.tar.gz
cd asterisk-1.6.2.0-beta3
./configure  \
   --sysconfdir=/etc/ \
    --libdir=/usr/lib \
   --localstatedir=/var/local \
   --datarootdir=/usr/share \
   --includedir=/usr/include \
   --disable-xmldoc

Ensure you've got all the various libraries, modules, bits and pieces attached:

make menuselect

If you are installing a system from scratch, then run all of these. If you already have configuration files, skip the 'make samples'.

make
make install
make samples
make progdocs

If you are using PostgreSQL, build the database tables with:

su - postgres
psql template1
> create database asterisk;
> \q
psql asterisk < /usr/src/asterisk-1.6.2.0-beta3/contrib/scripts/realtime_pgsql.sql

Then edit /etc/asterisk/res_pgsql.conf to add connection information. Other files you may need to edit include:

sip.conf
dahdi-channels.conf
cdr_manager.conf
cdr_pgsql.conf
cdr.conf
extensions.conf
iax.conf

Get things started with:

/etc/init.d/dahdi start
safe_asterisk

[/OpenSource/Debian/Asterisk] permanent link


2009 Jul 29 - Wed

A Singleton Per Thread

A while ago, I had written about singletons, and how there isn't something straightforward in Boost. Recently, I've seen references to a couple of interesting messages regarding not only singletons, but how to get a singleton per thread.

One starts by considering Boost Thread Local Storage and how to use it.

Then one can consider the concept of a thread-safe lazy singleton template class from the Boost Cookbook, which is a singleton implementation not referenced in my other article.

Rutger ter Borg suggested the following untested possible code snippet:

#include <boost/thread/tss.hpp>

template< typename Singleton >
Singleton& get_singleton() {
  // one pointer per thread; static, so it persists across calls
  static boost::thread_specific_ptr< Singleton > m_singleton;
  if ( !m_singleton.get() ) {
    m_singleton.reset( new Singleton() );  // lazily construct for this thread
  }
  return *m_singleton;
}
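
A hypothetical usage sketch (the Counter type is invented for illustration); each calling thread lazily gets its own instance, so no locking is needed:

struct Counter {
  Counter(): hits( 0 ) {}
  int hits;
};

void worker() {
  Counter& c = get_singleton< Counter >();  // created on first use per thread
  ++c.hits;  // thread-local, so no synchronization required
}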

[/Personal/SoftwareDevelopment/CPP] permanent link


2009 Jul 24 - Fri

Debian Lenny with Sendmail, Dovecot, MailScanner, SpamAssassin: Part 6

I've spent the last few articles writing about getting an open source email server up and running. So far so good. My email logs show that a tremendous amount of spam is being blocked. One begins to wonder if there is any real email remaining anymore.

During the building of this server, a number of web sites provided useful information for troubleshooting and for configuration. I'm listing them here for reference before I close them out.

In some follow-up, I came across MailWatch, which is a web-based front-end to MailScanner written in PHP, MySQL and JpGraph and is available for free under the terms of the GNU Public License.

[/OpenSource/Debian/email] permanent link


2009 Jul 19 - Sun

Debian Lenny with Sendmail, Dovecot, MailScanner, SpamAssassin: Part 5

A couple of articles ago, I started on the Dovecot installation. I managed to download, build, and get a rough installation in place. I also prepared a userid for the service. It was at that point in the Dovecot installation instructions, where they started talking about certificates, that I side-tracked into Certificate Authorities and certificate installation.

In /etc/dovecot, I copied dovecot-example.conf to dovecot.conf. In dovecot.conf, I updated the following lines to get things started:

protocols = imap imaps
disable_plaintext_auth = no
ssl = no
mail_location = maildir:~/Maildir
#mail_location = maildir:/%h/Maildir
auth_debug_passwords = yes

Dovecot Wiki does a good job of explaining the installation process. In fact, the non-ssl installation process is quite painless, and consists mostly of testing the connection.

Once the basic configuration is tested, then enable the configuration for ssl, and restart Dovecot.

disable_plaintext_auth = yes
ssl = yes
auth_debug_passwords = no
# Same keys from the sendmail installation
ssl_cert_file = /etc/ssl/private/mail.example.com.crt
ssl_key_file = /etc/ssl/private/mail.example.com.key

Start up an IMAP session with a mail client and try IMAP and IMAPS. Try sending email as well through the SMTP Sendmail connection with encryption. Tcpdump can be used to look at packets.

There is a Sample Dovecot init.d script which can be used to start, stop, and reload the service. The sample can be pasted verbatim into /etc/init.d/dovecot. Also do a 'chmod 755 /etc/init.d/dovecot'. Then '/etc/init.d/dovecot start'.

With a successful send and receive of email, that wraps up the rather lengthy configuration of a reasonably protected email solution encompassing Sendmail as an email transport mechanism, Dovecot as an IMAP/IMAPS service, and MailScanner with SpamAssassin/F-Prot for email scanning and protection.

[/OpenSource/Debian/email] permanent link


Debian Lenny with Sendmail, Dovecot, MailScanner, SpamAssassin: Part 4

It has taken a series of articles to get Sendmail installed and working with authentication, inline encryption, and some inline DNSBL capabilities. In this article, I'll see if I can get MailScanner, SpamAssassin and a virus scanner up and running with Sendmail.

Before starting into that though, I have a couple of links to other sites which have good information for tuning the sendmail.mc file:

Back to the install. Starting with SpamAssassin, a Perl based utility whose latest version appears to be 3.2.5 from June of 2008, it can be downloaded from CPAN by starting the command line with 'perl -MCPAN -eshell':

install Bundle::CPAN
install Term::ReadLine
install MIME::QuotedPrint
install YAML
install YAML::Syck
install MIME::Base64
install Time::HiRes
install Digest::SHA1
install Net::DNS
install Mail::SPF
install IP::Country
install Net::Ident
install Mail::DomainKeys
install Mail::DKIM
install DBI
install LWP::UserAgent
install HTTP::Date
install Encode::Detect
install Mail::SpamAssassin

The pre-requisites build nicely, but the main Mail::SpamAssassin unit does not test well because it tries to start a daemon, which apparently fails to start. Finding the reason will take some digging, but in the meantime, a force install may or may not be required. It is probably irrelevant anyway, as MailScanner does not use spamd.

For a virus scanner, I've used f-prot in the past, and I'll try it again for this install. Others have used ClamAV, and I may add it as a secondary scanner. (Note: the file downloaded is a 64-bit version.) The last bit of the install script will ask if the daemon should be installed in crontab... select no, as MailScanner will start it itself. Nor should Sendmail be configured to run the scanner.

cd /usr/src/
wget http://files.f-prot.com/files/unix-trial/fp-Linux-x86_64-ws.tar.gz
cd /opt
tar -zxvf /usr/src/fp-Linux-x86_64-ws.tar.gz
cd f-prot
./install-f-prot.pl
fpscan /etc/passwd

Create a test file and put the EICAR test virus string into it. Run 'fpscan test' to ensure it finds the virus.

For MailScanner, the following Perl modules are required:

install Sys::Syslog
install Net::CIDR
install IO::Stringy
install Mail::Util
install File::Spec
install HTML::Tagset
install HTML::Parser
install MIME::Tools
install File::Temp
install Convert::TNEF
install Compress::Zlib
install Archive::Zip
install Check::ISA

Next steps:

cd /usr/src
wget http://www.mailscanner.info/files/4/tar/MailScanner-install-4.77.10-1.tar.gz
tar -zxvf MailScanner-install-4.77.10-1.tar.gz
cd MailScanner-install-4.77.10
./install.sh

A few settings, like the domain name, may need to be changed in the /opt/MailScanner/etc/MailScanner.conf file.

Add the following with 'crontab -e' (the minute offsets may be randomized):

37      5 * * * /opt/MailScanner/bin/update_phishing_sites
07      * * * * /opt/MailScanner/bin/update_bad_phishing_sites
58     23 * * * /opt/MailScanner/bin/clean.quarantine
#42      * * * * /opt/MailScanner/bin/update_virus_scanners
#3,23,43 * * * * /opt/MailScanner/bin/check_mailscanner

In /etc/mail/sendmail.conf, the MailScanner install notes recommend changing 'DAEMON_PARMS="";' to:

DAEMON_PARMS="-ODeliveryMode=d -OQueueDirectory=/var/spool/mqueue.in";

Instead, use:

DAEMON_PARMS="-ODeliveryMode=background -OQueueDirectory=/var/spool/mqueue.in";

By default, Sendmail will use a Delivery Mode of Background, which operates by forking itself and processing the message. With the MailScanner-recommended Delivery Mode of Deferred, no DNS or DB lookups are performed. QueueOnly mode will actually perform DNS lookups, which is what I need for handling the SpamHaus enhdnsbl Features, but serializes all inbound connections. Queue mode sounds like the most straightforward option for working with MailScanner, but may not be just right. I thought that Background would work better, as it forks and handles simultaneous connections. However, on further testing, I find that Sendmail delivers the mail itself in Background mode, and queues it in QueueOnly mode, so QueueOnly mode it is.

Rerun /usr/sbin/sendmailconfig, then '/etc/init.d/sendmail restart' to get the mta agent and queue runner running as separate processes.

Add a 'crontab -e' entry to ensure MailScanner is always running:

0,20,40 * * * * [ -x /opt/MailScanner/bin/check_mailscanner ] && /opt/MailScanner/bin/check_mailscanner >/dev/null 2>&1

Edit the /opt/MailScanner/etc/MailScanner.conf file:

  • Set 'Virus Scanning' to yes
  • Set 'Virus Scanners' to f-prot-6

Test the virus scanner with '/opt/MailScanner/lib/f-prot-6-wrapper /opt/f-prot eicar.virus'.

Restart MailScanner.

[/OpenSource/Debian/email] permanent link


2009 Jul 18 - Sat

Testing HTTPS Connections with OpenSSL

To test what gets returned from port https (port 443) of a web server, connect with:

openssl s_client -connect www.example.com:443

Then put in the following, followed by two carriage returns:

GET / HTTP/1.0

[/OpenSource] permanent link


2009 Jul 17 - Fri

OpenSSL Server Certificates

To use the SSL/TLS verification and encryption features of OpenSSL based certificates for email, web, ldap, database, and other similar solutions, certificates need to be created, signed, installed, and given a path to a valid certificate authority. Many people will use self-signed certificates just to get the verification and encryption capabilities for self-use. At the present time, it is possible to obtain a path to a free certificate authority: StartSSL provides free certificate signing to secure personal web sites, public forums, or web mail.

To use StartSSL's services, you first need to create an account with them, which is reasonably painless. If you own your own domain and email solution, you can get your domain validated. The basic criterion is that you have access to postmaster, webmaster, or hostmaster @ yourdomain.com. Once you've validated your domain, you can start getting certificates signed. StartSSL has a root certificate included with recent OpenSSL releases.

There are several ways to create a certificate and generate the associated signing request. digicert provides a page that will help generate the openssl command to create the key and csr (signing request) files. The most important item is the 'Common Name': it needs to be the FQDN (Fully Qualified Domain Name) of your server, like 'mail.example.com'. For Certificate Authorities offering a wild-card certificate, which can be placed on multiple servers, the FQDN would be something like '*.example.com'. The request comes out looking like the following (where .key is the generated key, and .csr is the signing request to be sent to the Certificate Authority):

openssl req -new -newkey rsa:2048 \
  -nodes -out mail_example_com.csr \
  -keyout mail_example_com.key \
  -subj "/C=US/ST=NV/L=Las Vegas/O=Example Co./CN=mail.example.com"

You can take a look at the .csr (Certificate Signing Request) by:

openssl req -text -noout -in mail_example_com.csr

Take a look at the .key file by:

openssl rsa -text -noout -in mail_example_com.key

Be aware that the key generated above is generated without a password. Therefore ensure the .key file is readable only by the accounts requiring access.

The two step manual way to generate an RSA private key and signing request is:

openssl genrsa -out mail_example_com.key 2048
openssl req -new -key mail_example_com.key -out mail_example_com.csr

For the second command of the two, openssl will prompt for a number of pieces of information: Country Code, State or Province Name, City, Organization, Unit (which can be left blank), Common Name (Fully Qualified Domain Name, or a wild-carded FQDN), Email Address (which can be left blank, but use something valid anyway, as a default may be inserted by the signing authority), Password (which should be empty if being used with self-starting services), and an optional Company Name (left blank).

The content of the .csr file can then be sent to the Certificate Authority for signing. After sending my file to StartSSL, they said it might take up to six hours to approve the request. It was actually returned in under an hour.

The content of a signed certificate (a .crt, .cert, or .pem file) can be viewed with:

openssl x509 -in mail_example_com.crt -noout -text

[/OpenSource] permanent link


2009 Jul 16 - Thu

Certificate Authorities

In rebuilding my servers, many of the services--such as email, vpn, ldap, database, dns--make use of authentication and encryption protocols. Many of these make use of the OpenSSL Project for implementing Secure Sockets Layer (SSL). The authentication side of things requires the use of Certificate Authorities to ensure a chain of validation, enabling clients to validate that the server/service to which they are connecting is who or what it says it is.

Certificate Authorities (CA) come in various capabilities and pricing levels. When authentication is only needed within an organization, certificates can be self-signed. The simplest mechanism, but least maintainable solution, is to have each machine generate and self-sign its own certificate. When more than one machine needs a certificate, it is best to implement an organizational Certificate Authority.

For Microsoft based networks, Microsoft has a standard level and an enterprise level Certificate Authority service. The enterprise level is required when implementing 802.1x network security protocols.

For Open Source based networks, there are Open Source based Certificate Authorities, such as OpenCA.org, SimpleCA, Home Brew, or TinyCA, to name a few. A couple of good sites discussing the steps of being your own Certificate Authority include: Be Your Own Certificate Authority, by George Notaras, and Becoming a X.509 CA, by David Pashley.

Since some of my services are open to the Internet, I need access to a public Certificate Authority. There is a free Certificate Authority known as CAcert. Its popularity appears to be growing steadily year by year. Its drawback is that it is not included as a root authority in any of the popular browsers.

StartSSL has, in addition to paid services, free digital certificates. They do have a root authority certificate in many browsers, but not in Internet Explorer. Even so, they do have an OpenID authentication service, which comes in handy for signing into the increasing number of websites offering OpenID sign in capability.

I've seen single root certificates for as low as $9.95/yr. Many of them are resellers of RapidSSL. When compared to Thawte or VeriSign, RapidSSL seems reasonably priced, even for the WildCard product which allows multiple servers within the same domain to hold the same certificate.

Based upon some of the Certificate Authority service descriptions, the low price services cater to the low volume traffic users, whereas the higher priced certificates provide for fast authentications for high volume websites.

SSL Shopper has comparisons of some higher end public Certificate Authorities.

[/OpenSource] permanent link


2009 Jul 13 - Mon

Debian Lenny with Sendmail, Dovecot, MailScanner, SpamAssassin: Part 3

In part two of this series, I started into the installation of the Dovecot IMAP service. The IMAP service can use validation and encryption through the use of SSL/TLS services. SSL/TLS services require the use of certificates signed through a Certificate Authority. Many installation directions use the simple expedient of self-signed certificates. As some of the services I'm building are quasi-public, I wanted to go through the exercise of getting my certificates signed by a Certificate Authority. As such, I was side-tracked into doing some research, which became the two intermediate certificate related articles above.

I'm going to step back to my SendMail install, and get a certificate installed in order to utilize SendMail's TLS based verification and encryption capabilities.

In the /etc/mail/sendmail.mc file, the following needs to be available (I've enabled AUTH as well):

include(`/etc/mail/sasl/sasl.m4')dnl
include(`/etc/mail/tls/starttls.m4')dnl

Don't put these lines in the submit.mc file as they will cause permission errors.

For configuring AUTH (SASL2), edit /etc/default/saslauthd and make sure 'MECHANISMS="pam"' is included and then start the service: /etc/init.d/saslauthd start. Shell users should now be able to authenticate, otherwise use /usr/sbin/saslpasswd2 to add users.

You can check in /etc/mail/tls to see various self-signed certificates which have already been created and linked within the configuration file /etc/mail/tls/starttls.m4. The various settings can be changed to match the new certificate. I changed the line with confCACERT to match my StartCom CA found in /etc/ssl/certs. I had placed my new server key and cert in /etc/ssl/private, and in sendmail.mc, updated confSERVER_CERT and confSERVER_KEY to match.
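
As a sketch, the defines mentioned above might end up looking something like this (the macro names are sendmail's own; the file names are illustrative and should be adjusted to suit):

define(`confCACERT_PATH', `/etc/ssl/certs')dnl
define(`confCACERT', `/etc/ssl/certs/StartCom_Certification_Authority.pem')dnl
define(`confSERVER_CERT', `/etc/ssl/private/mail_example_com.crt')dnl
define(`confSERVER_KEY', `/etc/ssl/private/mail_example_com.key')dnl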

Once the certificates are properly installed and SendMail restarted, it can be tested by telnetting to port 25, running 'ehlo localhost', and looking for a line with '250-STARTTLS'. If it is there, all is well.

I found the page SMTP STARTTLS in sendmail/Secure Switch to be of some help in building the scenario.

For testing the STARTTLS capability, one can use one of the following openssl commands (the first works better than the second):

openssl s_client -starttls smtp -connect localhost:25
openssl s_client -ssl3 -state -debug -msg -connect localhost:25

For other OpenSSL s_client command line parameters, visit: s_client man page.

At one point, I was getting errors in sendmail logs with:

STARTTLS=read: 12080:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:284:
STARTTLS: read error=generic SSL error (-1), errno=104, 
  get_error=error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number, retry=1, ssl_err=1

I think these are permissions related, depending upon the privileges of the certificate files and the username under which sendmail is running. Sendmail is now running under root and no longer has these problems. The errors magically disappeared during some restart, so I can't confirm this for sure. Further information: the errors happen when running the 'openssl s_client -ssl3 -state -debug -msg -connect localhost:25' command, but not the 'openssl s_client -starttls smtp -connect localhost:25'. I haven't spent the time to determine why yet.

I was also getting errors like:

STARTTLS=client: file /etc/ssl/private/sub.class1.server.ca.pem unsafe: Permission denied
STARTTLS=client, error: load verify locs /etc/ssl/certs, /etc/ssl/private/sub.class1.server.ca.pem failed: 0

These errors went away by taking the starttls.m4 and sasl.m4 macros out of submit.mc.

[/OpenSource/Debian/email] permanent link


2009 Jul 12 - Sun

Debian Lenny with Sendmail, Dovecot, MailScanner, SpamAssassin: Part 2

Now that email is inbound and being stored, I need a mechanism for accessing it remotely. In the past I used courier-imap. Lately, the in-thing appears to be Dovecot. It appears to be fast, simple, and effective.

The Debian package repository is not really up-to-date, so I'll have to download the source and compile. The source is Dovecot v1.2.1. I usually put it into /usr/src and expand it with 'tar -zxvf'. For configuring and compiling, I used:

./configure \
  --sysconfdir=/etc/dovecot \
  --with-storages=maildir \
  --localstatedir=/var/local/dovecot \
  --with-rundir=/var/local/dovecot/run \
  --with-statedir=/var/local/dovecot/state \
  --with-pam
make
make install

A user dovecot needs to be added with 'useradd -r dovecot'.
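
A minimal /etc/dovecot/dovecot.conf sketch for this setup, assuming Dovecot 1.2's configuration syntax and the certificate paths used in the SendMail articles:

protocols = imap imaps
mail_location = maildir:~/Maildir
ssl_cert_file = /etc/ssl/private/mail_example_com.crt
ssl_key_file = /etc/ssl/private/mail_example_com.key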

[/OpenSource/Debian/email] permanent link


Debian Lenny with Sendmail, Dovecot, MailScanner, SpamAssassin: Part 1

I am in the process of migrating and updating my email server to something bigger-better-faster. The last time I built an email server was a number of years ago on a Redhat system. Things have changed since then. During my re-learning process, here are some notes I've made on getting Sendmail and related processes onto a Debian Lenny system.

Once upon a time, Sendmail was the MTA (Message Transfer Agent) of choice. Most Linux operating systems used it by default. Currently it looks as though Exim and Postfix are now the primary choices for an MTA on the Debian flavour of Linux. Well, I can't let my Sendmail books go to waste, so I'm sticking with Sendmail as my MTA. In this installment, I describe some of the bits I needed for getting the Sendmail part installed and partially configured.

For the system, I did a basic install of Debian Lenny 5.0.1. When the package list came up, I unselected everything, including the Email and Standard System choices. That keeps the basic operating system footprint small.

Only a few packages are needed for Sendmail:

apt-get install libsasl2-modules
apt-get install libsasl2-modules-ldap
apt-get install sasl2-bin
apt-get install openssl
apt-get install ca-certificates
apt-get install build-essential
apt-get install libssl-dev
apt-get install libpam-dev
apt-get install sendmail

I had problems with the amd64 version of Debian Lenny 5.0.1 and sendmail. I was able to build everything, but the one thing that didn't work was the 'enhdnsbl' FEATUREs. I'll have to perform the build from scratch to see if I can recreate the problem. For now, just to get things done, I built the server with 32 bit i386, and the enhdnsbl FEATURE is functioning fine. (Note: after having rebuilt this in 32 bit mode and testing the enhdnsbl feature through the course of the build, I find that the problem occurs due to the MailScanner requested DAEMON_PARMS setting in sendmail.conf. This problem is discussed further in installment 4 of this series.)

To enable saslauthd, edit /etc/default/saslauthd and set START=yes (warning). Run '/etc/init.d/saslauthd start'

The package sensible-mda is installed along with sendmail. Sensible-mda is called by the MTA, and will in turn call whichever of the following MDAs that it finds (in this order): procmail, maildrop, deliver, mail.local.

In a previous installation, I used Courier's maildrop program to get messages into a Maildir format directory. It didn't work so well this time (it was very hard to troubleshoot, as it turns off debugging information in local delivery mode). Instead, procmail can deliver to Maildir format directories, so I used that. To make this work, /etc/procmailrc needs the line DEFAULT=$HOME/Maildir/ .
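
A minimal /etc/procmailrc sketch; the trailing slash on DEFAULT is what tells procmail to deliver in Maildir format rather than appending to an mbox file:

# deliver each message as its own file under the user's Maildir
DEFAULT=$HOME/Maildir/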

To get things done the fast easy way, I'm simply storing email in ~/Maildir until I can get an LDAP mechanism up and running.

Maildir folders store email as one file per message, so file locking requirements are reduced. Mbox files store all messages in one, possibly large, single file.

Just so that the /home directory isn't completely shallow and wide, I edited the /etc/adduser.conf file and changed LETTERHOMES to yes. "The created home directories will have an extra directory - the first letter of the user name. For example: # /home/u/user."

I'll try this out on the next user I create, but I believe that by creating the directory Maildir in /etc/skel ('mkdir /etc/skel/Maildir' followed by 'chmod 740 /etc/skel/Maildir'), the directory will automatically be available in the new user's home directory.

Instead of setting up a bunch of aliases for a bunch of email addresses that default to my standard email address, I created a virtusertable. The first lines provide explicit email address to local user mappings, something like

john@oneunified.net	john

The remainder of the file has entries like:

@oneunified.net		ray

The sendmail.mc file requires a corresponding 'FEATURE(`virtusertable')dnl' line.
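
After editing /etc/mail/virtusertable, the hash database needs to be rebuilt and sendmail reloaded; something like:

makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
/etc/init.d/sendmail reload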

I'm getting ahead of myself here, but for testing the configuration, commands can be sent to sendmail by telnet to port 25, or by creating a small test content file and sending a message with a command similar to 'sendmail ray@example.com < test.msg'. Content of the test message:

to:Ray Burkholder 
from:Example 
subject:test from tester

test message

The dnsbl resource site seems to think that SpamHaus is pretty good as a DNS based BlackList source. I had been using a number of different sources, and I needed to make things current, as some dnsbl sources have disappeared or turned unreliable. I've ended up using two sources, and spamhaus seems to prevent a very large chunk of spam from getting further into my system, ie, a large percentage doesn't make it through the opening shots of the Sendmail pathways.

A DNS based Black List source (dnsbl) works by taking an email originator's ip address and generating a dns query to a specialized spam black list site. Based upon the response to the query, mail can be accepted or rejected immediately, without further processing. A return code is simply a loopback address flavour, with an implicit 127.0.0.1 (an empty response) being a sign of a problem free address, and anything of 127.0.0.2 or greater signifying some issue with the address. More info can be found at Spamhaus.
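
A lookup can be tried by hand. Spamhaus documents 127.0.0.2 as a test address that is always listed, so reversing its octets and querying zen should return one or more 127.0.0.x records:

dig +short 2.0.0.127.zen.spamhaus.org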

The two dnsbl entries I use are:

dnl FEATURE(`enhdnsbl', `example.com', `"Spam block is hardcoded"', `t')dnl
FEATURE(`enhdnsbl', `zen.spamhaus.org', `"Spam blocked see: http://www.spamhaus.org/query/bl?ip="$&{client_addr}', `t')dnl
FEATURE(`enhdnsbl', `bl.spamcop.net', `"Spam blocked see: http://spamcop.net/bl.shtml?"$&{client_addr}', `t')dnl

Before using a dnsbl, be sure to read, understand, and conform to their terms of service.

To quickly test that the enhdnsbl FEATURE is functioning (assuming you have access to a dns server for example domain example.com):

  • choose a machine from which you can telnet to sendmail on port 25
  • determine its ip address, say in this case, 10.23.43.5
  • insert a line into the dns server similar to '5.43.23.10.example.com. IN A 127.0.0.2' (the address is backwards)
  • uncomment the enhdnsbl FEATURE in the collection of 3 above, rebuild sendmail.cf, and reload sendmail
  • telnet to the sendmail server, and you should see a 'ruleset=check_relay, arg1=[10.23.43.5], arg2=127.0.0.2, .... ' type line in mail.log

In the sendmail.mc file, I also disabled 'FEATURE(`delay_checks', `friend', `n')dnl' (if it has been turned on by default), as it will accept a message, check the recipient, then perform the dnsbl lookup. This feature is for when you need to accept mail from a blacklisted address for one particular recipient, but no one else. By disabling it, all senders from a blacklisted address are denied outright. In addition, with the option enabled the mail.log file will have check_rcpt entries; with it disabled, the mail.log file will have check_relay entries.

To look at messages that have made it through Sendmail and been locally delivered with procmail, a program called Mutt can be used. By default, Mutt reads mbox mail files; a configuration change is required to read Maildir folders. The Mutt FAQ goes into more detail, but the basics are to put the following lines into ~/.muttrc:

set mbox_type=Maildir

set spoolfile="~/Maildir/"
set folder="~/Maildir/"
set mask="!^\\.[^.]"
set record="+.Sent"
set postponed="+.Drafts"

Richard Curnow has written a program to index, search, and create links to email messages stored in the Maildir folders.

During testing of my Sendmail configuration, from an email client, I was seeing messages like the following:

sendmail dsn=5.0.0, stat=Service unavailable
554 5.3.0 rewrite: map access not found

It turned out to be an error in my sendmail.mc configuration file, where I was missing a closing single quote. The m4 processing that turns a sendmail.mc file into a sendmail.cf file is not very helpful in tracking down simple syntax errors such as the one that caused this problem.

I don't know if it is legal or not, but I found Sendmail, 3rd Edition online. I don't know how long the link will remain valid.

[/OpenSource/Debian/email] permanent link


2009 Jun 28 - Sun

Blosxom Reinstall on Debian Lenny 5.0.1

It is almost time to retire my Perl based blogging software known as blosxom. It has performed well. However, my page count is starting to get high, and blosxom is taking longer and longer to process. For now, I've moved it to faster hardware while I work on a different blog delivery mechanism (I hope to have Wt on C++ with PostgreSQL running the back-end soon).

Copying over the directory structure was no real problem. The only real thing needed was to put the mod_rewrite stuff back in so the unsightly cgi-bin url portion is removed. By default, mod_rewrite is not enabled. To enable it:

a2enmod rewrite

Here is the rewrite configuration as it looks in the sites-enabled/default file:

  RewriteLogLevel 0
  RewriteLog /var/log/apache2/rewrite.log

<Directory "/var/www/blog">
  AddHandler cgi-script .cgi
  Options +ExecCGI
  RewriteEngine On
  RewriteCond %{REQUEST_URI} !-f
  RewriteCond %{REQUEST_URI} !-d
  RewriteRule ^(.*)$ /cgi-bin/blosxom.cgi/$1 [L,QSA]
</Directory>

[/OpenSource/blosxom] permanent link


Perl Mason Install

Installing mason v1.42 from Mason HQ is quite straight-forward (a minimal Apache hook-up sketch follows the list):

  • apt-get install build-essential
  • apt-get install libapache2-mod-apreq2
  • apt-get install libapreq2-dev
  • apt-get install libapache2-request-perl
  • ln -s /etc/apache2/mods-available/apreq.load /etc/apache2/mods-enabled/apreq.load
  • perl -MCPAN -eshell
  • install HTML::Mason
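
Once installed, the Apache side might look like the following sketch (the directory is an assumption, and this is mod_perl2 syntax):

PerlModule HTML::Mason::ApacheHandler
<Directory /var/www/mason>
  SetHandler perl-script
  PerlResponseHandler HTML::Mason::ApacheHandler
</Directory>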

[/OpenSource/Debian/Monitoring] permanent link


VMWare Mouse Release on Debian Lenny Guest

A simple message to myself. When installing a Debian Lenny 5.0.1 KDE guest in VMWare Workstation hosted on Windows XP, a few steps are required in order to move into and out of the guest without the ctrl-alt mouse release sequence:

  • apt-get install build-essential on the guest
  • apt-get install linux-headers-...
  • build and install the VMWare toolkit in the guest
  • add 'Option "CorePointer"' to the mouse section of /etc/X11/xorg.conf
  • add 'Option "CoreKeyBoard"' to the keyboard section of /etc/X11/xorg.conf
  • restart KDE

A visit to a VMWare Community describes a couple of additional steps for getting the VMWare Shared Folders (HGFS) Share to work inside of Lenny 5.0.1 as well. Basically, in the /etc/fstab file, which VMWare updates when you perform a tool install, append ',uid=1000,gid=1000' to the 'ttl=5' portion of the .host line, so it looks something like:

.host:/ /mnt/hgfs vmhgfs defaults,ttl=5,uid=1000,gid=1000 0 0

The actual value to be used in place of 1000 is whatever your uid/gid are for your current window session. This can be determined at the command line by:

echo ${UID}

Without the uid/gid values in the fstab file, the share is made available for the root user. Anyway, after the restart, by using the file explorer, go to /mnt/hgfs to see the volumes.

[/OpenSource/Debian] permanent link


Web Statistics with awstats on Debian Lenny 5.0.1

On an old system, I used Webalizer to analyze Apache log files. On a newer system I thought I'd give awstats a try. I had two options, install via the original source, or install via apt-get. Considering the number of files and directories involved, I decided to go with the Debian package manager to install awstats.

The package manager put things into decent directories, but the package appears to have been built for an earlier flavour of Debian. A few things I had to fix up for working in Debian Lenny 5.0.1 with Apache 2 include:

  • In /etc/cron.d/awstats, changing one of the file checks from /var/log/apache/access.log to /var/log/apache2/access.log
  • changing the ownership of the logs in /var/log/apache from root.adm to root.www-data (an alternative might be www-data.adm)
  • changing the creation ownership in /etc/logrotate.d/apache2 from 'create 640 root adm' to 'create 640 root www-data'
  • in /etc/awstats/awstats.conf.local, added 'LogFormat=1' and 'DirIcons=/awstats/icon'
  • in /etc/apache2/sites-enabled/000-default, added 'Alias /awstats/icon "/usr/share/awstats/icon"'
  • the version of awstats installed was 6.5; I downloaded the awstats.pl file from the awstats site and placed it in the /usr/lib/cgi-bin directory as a simple upgrade to v6.9.

During package installation, the package manager suggested some additional packages: libnet-dns-perl libnet-ip-perl libgeo-ipfree-perl. Perhaps when I get a chance, I'll install those and see what they add to the statistics management.

[/OpenSource/Debian] permanent link


2009 Jun 27 - Sat

Network Broadcast Addresses

A customer was performing penetration testing on their network. Once the test results were in, among other things, they had a couple of questions about responses to certain addresses on their external subnet range.

As a background, every subnet with a network mask of /30 or shorter has three address groups:

  • first address: the zeros address aka network address
  • middle addresses: usable addresses
  • last address: the ones address aka broadcast address

For explanation purposes, imagine a router with two interfaces:

  • interface 1, the ingress interface, with address range of 10.0.0.0/30 and interface address of 10.0.0.1.
  • interface 2, the egress interface, with address range of 10.0.0.4/30 and interface address of 10.0.0.5.

For some network devices, a packet arriving on the ingress interface destined for the broadcast address of the egress interface (10.0.0.7) will be forwarded, effectively broadcasting to all devices located in the subnet of the egress interface. When many packets arrive in this manner, this is known as a Smurf Attack.

Current Cisco devices, by default, no longer forward packets to broadcast addresses, but may respond to these packets. The following command is applied by default to prevent forwarding of packets to broadcast addresses:

no ip directed-broadcast

At the other end of the subnet, for the network address, I originally thought this was a quiescent address. However, I did find that an ICMP echo request arriving on the ingress interface destined to the network address (10.0.0.4) of the egress interface will generate an echo-reply with the ingress ip address (10.0.0.1) as the source address.

It appears that in days gone past, for BSD Unix boxes and various other equipment, the network address was *the* broadcast address. This is why some configurations allow one to configure the broadcast address setting, whether it be the high end or low end of a subnet (thanx to Steinar Haug for this info).

rfc 1122 formalizes this broadcast address configuration (thanx to an insightful responder named Lee):

   3.3.6  Broadcasts

         There is a class of hosts* that use non-standard broadcast
         address forms, substituting 0 for -1.  All hosts SHOULD
         recognize and accept any of these non-standard broadcast
         addresses as the destination address of an incoming datagram.
         A host MAY optionally have a configuration option to choose the
         0 or the -1 form of broadcast address, for each physical
         interface, but this option SHOULD default to the standard (-1)
         form.

The host will respond with the echo-reply because of rfc 791:

   3.2.1.3  Addressing: RFC-791 Section 3.2

             ...   An incoming datagram is destined
            for the host if the datagram's destination address field is:

            (1)  (one of) the host's IP address(es); or

            (2)  an IP broadcast address valid for the connected
                 network; or

From a Cisco router perspective, the default use of the command 'no ip directed-broadcast' allows one to use a /31 subnet (two ip addresses) for point to point links instead of the usual /30 subnet (four ip addresses). One can effectively address twice as many links with the same number of addresses. This feature is mentioned in Cisco's Feature Guide: Using 31-Bit Prefixes on IPv4 Point-to-Point Links.
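
As a sketch (interface and addressing invented for illustration), a /31 point to point link on a supporting IOS release looks like:

interface Serial0/0
 ip address 10.0.0.0 255.255.255.254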

Coincidently, while I was writing this article, I received a note that a couple of TCP Security Assessment documents are available.

These documents go into the details of the bits and bytes making up the TCP protocol, analyzing the reasons for the bits, how they can be misused, and suggesting counter-measures for when they are used illegitimately. There is a detailed bibliography with active links to related papers and documents.

An idea of the scope of the document can be seen through its first level table of content:

  • The Transmission Control Protocol
  • TCP Header Fields
  • Common TCP Options
  • Connection-Establishment Mechanism
  • Connection-Termination Mechanism
  • Buffer Management
  • TCP Segment Reassembly Algorithm
  • TCP Congestion Control
  • TCP API
  • Blind In-Window Attacks
  • Information Leaking
  • Covert Channels
  • TCP Port Scanning
  • Processing of ICMP Error Messages by TCP
  • TCP Interaction with the Internet Protocol (IP)
  • References

[/Networks] permanent link


Securely Erasing Files

On a Linux system, there are a number of tools available for over-writing a file with random data, then deleting the file and hiding its name as well.

Of course, there are certain caveats that go along with this. If you focus only on securely deleting files, you will miss file content that may have been written to bad sectors, file journals, sectors released when files have been relocated from one area to another (as in when you edit or shorten files), and various other disk dead areas.

One popular tool is a utility called shred, found natively on most distributions. In its most basic form:

shred --remove filename

If you use the -v (verbose) option, you can see how many times it over-writes a file, and with what patterns it uses. It also uses a descending 0 write in order to obliterate a file name.

If you need to recurse sub-directories:

find * -depth  -type f | xargs shred --remove

If you have created and then moved or erased files, and want to ensure that the released content is overwritten, then you need to over-write the drive's free space and then release it. Some people suggest using dd to fill the free space and then using shred to overwrite and delete the single large file.

An alternative is to use scrub, a tool built by the Lawrence Livermore National Laboratory folks. It uses various national standards for selecting suitable patterns and over-writing strategies. Source can be found at Sourceforge.
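
A sketch of scrub usage, going from my recollection of its man page (verify the pattern names and options against your installed version):

scrub -p dod /dev/sdb     # overwrite an entire device with the DoD 5220.22-M sequence
scrub -X /home/fillup     # create and fill files in a new directory until free space is consumed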

A quick way to apply all 0's to the free space of a drive:

dd if=/dev/zero of=zerofile bs=1M
sync
rm zerofile

If you can't get scrub to work, then the dd sequence above combined with shred might be a good combination.

To ensure you have all the data, not just what was located in files or drive free space, one needs to apply scrub/shred to whole partitions and/or drives. The Gentoo Wiki talks about ways of securely deleting drives and partitions.

For near-absolute protection of data, I've known companies to specify that once a drive is no longer useful, that it be crushed and sent to landfill.

[/OpenSource/Linux] permanent link


2009 Jun 17 - Wed

New Release of WTL (Windows Template Library)

I've been able to start on a new project with a clean slate. For the portion residing on Microsoft Windows, I'm going to give the latest version (v8.1, build 9127) of the WTL (Windows Template Library) a try for crafting the GUI side of things. The latest version can be downloaded from SourceForge, which looks to have a release date of May 7, 2009.

The release notes don't indicate anything for running with the Visual Studio 2008 IDE. A blog entry at Code Gem called WTL Wizard for Visual Studio 2008 indicates that by changing a few references to registry entries in the Visual Studio 2005 script, one can get the new WTL Wizard to install in the 2008 IDE. He supplies downloadable source code in the blog entry. Another location for a patch file is located in the Yahoo Forums.

WTL has in the past been underdocumented. Besides some websites linked from the wtl.sourceforge.net web site, the WTL group on Yahoo has had a WTL Developer's Guide posted in Doc and PDF forms.

[/OpenSource/Programming] permanent link


2009 Jun 06 - Sat

The American Dream

In a recent issue of Investment News, former U.S. Comptroller General David Walker was quoted as saying:

"The American dream is not owning a house; it.s every individual having the opportunity 
to achieve their full, God-given ability, and each generation having the responsibility to 
leave the country better off and better-positioned than the next so that our children and 
grandchildren can have a better way of life than we have."

In light of the trillion dollar budget deficits which Obama's government is attempting to run up, Walker's warning seems well founded.

[/Personal/Business] permanent link


2009 Jun 04 - Thu

Installing PostgreSQL on Debian Lenny

Release 5.0.1 of Debian's Lenny GNU/Linux distribution includes version 8.3 of PostgreSQL.

During the creation of a new Debian Lenny server, a list of software packages is provided. To make a new PostgreSQL-only server, unselect everything, including the 'Standard system', then select 'SQL Database', and proceed with the installation.

Once installation has completed, and the new server has rebooted, the PostgreSQL service is not auto-started; a couple of manual commands need to be applied. In prior versions, PostgreSQL was auto-started. I think I understand the reasoning, particularly because it is useful for my situation.

During the server creation, I have a separate set of disks allocated for the database. By manually finishing the PostgreSQL implementation, I am able to initialize the directory location during service creation. If I've mounted my drives at /var/local/db, then these two commands get the PostgreSQL 8.3 service started:

pg_createcluster -d /var/local/db 8.3 main
/etc/init.d/postgresql-8.3 start
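
Debian's postgresql-common wrappers can then confirm the cluster is up and using the right data directory:

pg_lsclusters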

[/OpenSource/Debian] permanent link


2009 Jun 01 - Mon

Building WebGUI 7.7.8 on Debian Lenny

It has been a couple of years since I built a WebGUI server. The last one I built was on a Fedora Linux box.

This article is about building the most recent beta WebGUI on a Debian Lenny Linux box. The procedure is a bit long, but there is nothing complicated.

I start with a basic Debian build that has the 'Web Server' and 'Standard Build' options selected.

There are a few packages to install first:

apt-get install ntpdate
ntpdate 0.pool.ntp.org
apt-get install ntp
apt-get install build-essential
apt-get install mysql-server-5.0
apt-get install imagemagick
apt-get install perlmagick
apt-get install exim4-daemon-light
apt-get install exim4-conf
apt-get install libcrypt-ssleay-perl libnet-ssleay-perl
apt-get install libxml-sax-perl
apt-get install libxml-sax-expat-perl
apt-get install libxml-simple-perl
apt-get install libsoap-lite-perl
apt-get install libtext-aspell-perl
apt-get install libapache2-mod-apreq2
apt-get install libapreq2-dev
apt-get install libapache2-request-perl
ln -s /etc/apache2/mods-available/apreq.load /etc/apache2/mods-enabled/apreq.load

Then using

perl -MCPAN -eshell

Install or confirm the installation of the following Perl packages:

install Bundle::CPAN
install Log::Log4perl
install Class::InsideOut
install Config::JSON
install Module::Find
install Tie::IxHash
install Net::Subnets
install Text::CSV_XS
install Tie::CPHash
install Net::LDAP
install Exception::Class
install POE::Component::IKC::ClientLite
install POE::Component::Client::HTTP
install Clone
install HTML::Packer
install Path::Class
install Scope::Guard
install HTML::TagFilter
install DateTime
install HTML::TagCloud
install DateTime::Format::Strptime
install DateTime::Format::Mail
install Class::C3
install MIME::Entity
install XML::FeedPP
install CSS::Minifier::XS
install Color::Calc
install Finance::Quote
install Net::DNS
install Crypt::SSLeay
install XML::Simple
install JavaScript::Packer
install JavaScript::Minifier::XS
install Archive::Any
install HTML::Template::Expr
install SOAP::Lite
install Weather::Com::Finder
install Image::Size
install Image::Info
install Template
install Image::ExifTool
install Business::Tax::VAT::Validation
install HTML::Highlight
install CSS::Packer
install Contextual::Return
force install Test::Class
install Test::MockObject
install Text::Aspell

Download and expand the current software from SourceForge:

cd /usr/src
wget http://voxel.dl.sourceforge.net/sourceforge/pbwebgui/webgui-7.7.8-beta.tar.gz
tar -zxvf webgui-7.7.8-beta.tar.gz

Move some files around:

mkdir /data
mv WebGUI /data/

cd /data/WebGUI/etc/
cp log.conf.original log.conf
touch /var/log/webgui.log
chown www-data.www-data /var/log/webgui.log

cp spectre.conf.original spectre.conf

mkdir -p /data/domains/www.example.com/public/extras
mkdir  /data/domains/www.example.com/logs
cp WebGUI.conf.original www.example.com.conf
cp -R /data/WebGUI/www/uploads /data/domains/www.example.com/public/
chown -Rf www-data.www-data /data/domains/www.example.com/public/uploads

Append the following to /etc/rc.local:

cd /data/WebGUI/sbin
perl spectre.pl --daemon

Add the following to /etc/apache2/httpd.conf:

PerlSetVar WebguiRoot /data/WebGUI
PerlCleanupHandler Apache2::SizeLimit
PerlRequire /data/WebGUI/sbin/preload.perl

Add the following to /etc/apache2/sites-enabled/000-default:

ServerName www.example.com
ServerAlias www.example.com
DocumentRoot /data/domains/www.example.com/public
SetHandler perl-script
PerlInitHandler WebGUI
PerlSetVar webguiConfig www.example.com.conf 
Alias /extras /data/WebGUI/www/extras
Alias /uploads /var/local/webgui/uploads

Check that all the Perl packages are loaded:

cd /data/WebGUI/sbin
./testEnvironment.pl

Create the MySQL Database:

cd /data/WebGUI/etc
mysql -e "create database www_example_com"
mysql -e "grant all privileges on www_example.com.* to webgui@localhost identified by 'password'"
mysql -e "flush privileges"
mysql -uwebgui -ppassword www_example_com < /data/WebGUI/docs/create.sql

The /data/WebGUI/etc/www.example.com.conf file may need updates:

"sitename" : ["www.example.com",example.com"],
"dsn" : "DBI:mysql:www_example_com",
"dbuser" : "webgui",
"dbpass" : "password",
"uploadsPath" : "/data/domains/www.example.com/public/uploads",
"spectreSubnets" : ["127.0.0.1/32", "123.123.123.123/32"],

Start up Spectre:

cd /data/WebGUI/sbin
perl spectre.pl --daemon

Restart the web server:

/etc/init.d/apache2 restart

Browse to www.example.com and get started!

If you would like a pre-configured WebGUI server capable of running on VMWare let me know. I can even host Virtual Sessions.

Note: there is a problem with the "uploadsPath" thing.

Note: there is a problem with the "fileCacheRoot" thing.

[/OpenSource/Debian] permanent link


2009 May 26 - Tue

VMWare Datastore Browser

I'm sure the VMWare people have hidden this on purpose... just so you think you are forced into installing command line utilities or buying licensing for their management products.

Anyway, I have a couple of ESXi 3.5 U4 servers installed. I created a Virtual Machine on one server, then used the SSH scp command to copy the Virtual Machine from one host to the other. That is all well and good, but how do you get it to show in inventory?

The answer to that is to run the VMWare Infrastructure Client. That is no problem. The trick is to click on the Summary tab while in Inventory mode, and right click on the datastore. One can then browse the datastore. And one can right click on a .vmx file to register the Virtual Machine in Inventory. That same menu allows one to upload and download images from a local computer.

I think it would have been more intuitively obvious to have the datastore(s) listed in the left hand tree, but I guess that would make too much sense.

Some random notes on ESXi 3.5 U4:

  • One needs to purchase at least the foundation license in order to get the remote command line tools to work
  • When in the ESXi console, one can use vmkfstools to create and resize virtual drives. The GUI does not offer 'thin' provisioning, but the vmkfstools command does: 'thin' indicates what the overall size is without preallocating all the space at once (see the sketch after this list).
  • When using an Asterisk based server in VMWare, allocate at least 500MHz to the server in order to maintain non slipping time. More VMWare Timekeeping Best Practices
  • Veeam FastSCP: a VMware ESX/ESXi management tool that provides a fast, secure and easy way to manage files and bulk copy VMs across your VMware ESX environment.
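
A sketch of creating a thin provisioned disk from the ESXi console, per the vmkfstools note above (size and datastore path invented for illustration):

vmkfstools -c 20g -d thin /vmfs/volumes/DataStore1/somevm/somevm.vmdk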

[/Networks/VMWare] permanent link


2009 May 24 - Sun

VMWare on HP DL360 G6

I recently acquired a couple of decently configured HP DL360 G6 servers. Each boots VMWare directly from an embedded USB Token. Now that is a server that works right out of the box. And it did.

It is excellent to be able to use HP's management tools to view the console remotely. I've not laid hands on the server, but I have almost complete visibility into the unit. There are about 20 different temperature sensors; I can monitor and cap power usage, evaluate processor utilization, and much more. Remote access to CDRoms is also available through a virtual media Java mechanism. I'm using that now to upgrade to U4 of ESXi.

HP has their own special image and after a bunch of searching, I found it at Software Depot Home.

I had tried the U4 version from VMWare's site, but it wouldn't install itself in the correct spot. That is when I figured that HP must have a special version. Don't try to install HP's v8.20 of the management tools either. They are fraught with installation problems.

[/Networks/VMWare] permanent link


Sun Java 6 on Debian Lenny 5.01

I'd think Debian Linux should get simpler all the time. Maybe not. My tricks from Installing Sun Java on Debian Lenny didn't yield the desired results.

Perhaps if I had performed a standard Debian Lenny Desktop install, I would not have had this problem. Instead, I took the expert/custom route. During the beginning of the install of Debian Lenny, I chose the advanced options where I could install a KDE desktop. I'm not sure if the standard variation would have worked out of the box, but, whatever, this one didn't.

I had to go to Debian Tutorials to find the answer, which was a basic (if long) one-liner, but it required one preparation step beforehand: the directory /usr/lib/iceweasel/plugins needed to exist first. Then the one-liner could be performed: ln -s /usr/lib/jvm/java-6-sun-1.6.0.12/jre/plugin/i386/ns7/libjavaplugin_oji.so /usr/lib/iceweasel/plugins/

With that in place, I can now run Java applets in IceWeasel.

[/OpenSource/Debian/Development] permanent link


Enable SSH on VMWare ESXi

VMWare ESXi is installed and started with SSH disabled. Enabling it is an unsupported option, as it allows a user access to the console, operating system, and associated file system.

My primary reason for accessing the VMWare ESXi file system (vmfs) is the ease with which one can get ISO images onto the system. When running the VMWare Infrastructure Client, during the creation of a virtual machine, the virtual CD drive can be attached to an ISO image resident in the DataStore, the DataStore basically being the vmfs file system.

So to get read/write access to vmfs, one needs to activate SSH on VMWare:

  • At the console of the ESXi host, press Alt-F1 to bypass the simple management window and gain access to the console window.
  • There is no prompt and no text echo, but type unsupported and hit the enter key.
  • Enter the password you've assigned for root.
  • A prompt of ~ # will become visible.
  • Use vi to edit /etc/inetd.conf.
  • Find the line that begins with #ssh and remove the #, and save the file.
  • Use ps | grep inetd to find the existing inetd process id.
  • Restart the process with kill -HUP id.
  • You will now have access via SSH.

After logging in, the default datastore can be found at /vmfs/volumes/DataStore1. I created a sub-directory there named ISO to hold my ISO images. The directory and files are accessible from the VMWare Infrastructure Client when creating a new Virtual Machine. ISO files can be retrieved with the wget command.

I haven't done it yet, but one could add a .ssh directory under /root, do the appropriate magic (covered in another article), and log in with an ssh key rather than the root password.
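
The 'appropriate magic' would presumably be the standard OpenSSH layout, assuming the ESXi SSH daemon honours it:

mkdir -p /root/.ssh
chmod 700 /root/.ssh
cat id_rsa.pub >> /root/.ssh/authorized_keys   # public key copied over beforehand
chmod 600 /root/.ssh/authorized_keys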

Much of the information here was extracted from a couple of web sites, with VM-Help being the primary one. Its forum entries have additional useful information.

[/Networks/VMWare] permanent link


2009 May 16 - Sat

High Performance Messaging

Most of what I hear of low latency trading comes from data vendors who say their market data feeds are 'the best' because they are nearest the data source, and that their infrastructures have been designed for high availability and performance.

I've always thought though, that market data source adjacency forms only a portion of the overall delay budget. It seems to me that 'closeness' to the execution side of things is just as important, if not more so. This is confirmed through some articles I've recently seen that discuss some colocation facilities situated to optimally provide this 'betweenness', aka Smart Proximity Hosting.

The third aspect of low-latency trading resides within the compute engine: the engine that receives market data, calculates the trades, performs risk management, sends out the execution requests, and receives the execution confirmations. Copying data out of and into packets, as well as receiving and transmitting them, can be a time consuming process. Buffer management is a serious consideration in high frequency trading scenarios (the concept of high-frequency trading being intimately intertwined with the concept of low-latency market data feeds).

I came across Topics in High-Performance Messaging in relation to someone's generic question about how to test throughput on links. Buffer sizing is one of many important topics in optimizing throughput and reducing latency. This paper makes obvious many of the hidden gotchas for the compute engine, the links (how many, what kind, and how they are joined), the feed types, and the supporting L2/L3 infrastructure. Even though I came across it as a generic response to throughput testing, I see it is written by a group that has spent much time investigating low-latency issues in trading. I see the article as being very useful for shaving additional milliseconds/microseconds off the execution cycle time.

Another view on this low-latency issue arises in a blog entry from The Blog of James: Does the need to process volumes of data prohibit lower latency?

There is a news site dedicated to news regarding low latency trading issues: low-latency.com.

[/Trading/AutomatedTrading] permanent link


Martians

In terms of managing addresses on the public internet, there is a set of address ranges which one should never see... publicly. Privately, that is, within someone's local network, they can be seen, are seen, and should be seen. A filtering sketch follows the list.

  • 0.0.0.0/8: not seen as an address but as a default route.
  • 10.0.0.0/8: a common internal rfc 1918 range.
  • 127.0.0.0/8: localhost addresses, ie, loopbacks on individual machines, with 127.0.0.1 the most common. I've used additional addresses for setting up proxy forwarding with ssh port forwarding configurations
  • 169.254.0.0/16: rfc 3927 for internal networks without dhcp and no addressing structure
  • 172.16.0.0/12: a common internal rfc1918 range.
  • 192.0.2.0/24: rfc 3330 for documentation and example code
  • 192.168.0.0/16: a common internal rfc1918 range.
  • 198.18.0.0/15: rfc 2544 network benchmark tests
  • 223.0.0.0/8: reserved
  • 224.0.0.0/3: multicasting
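
A minimal sketch of dropping a few of these as source addresses on an Internet-facing Linux box (eth0 assumed; a real bogon filter would cover the full list above):

iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth0 -s 172.16.0.0/12 -j DROP
iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP
iptables -A INPUT -i eth0 -s 127.0.0.0/8 -j DROP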

More information on IPv4 addressing can be found at Wikipedia.

[/Networks] permanent link


2009 May 03 - Sun

Open Source Site of the Day: ModSecurity -- Open Source Web Application Firewall

mod_security is an actively maintained web application firewall. From my reading, it looks like it is a filter for processing web requests before they hit a company's main web server. It performs a series of different checks and balances: it looks at http headers for correctness, does common checks on field content so as to prevent injection attacks, and, through a command language, can perform complex analysis within a request as well as across requests.

It can be used as an appliance in-line or out-of-line, or can be used as a module right on the web server. The company defines their 'Web Application Firewall' as a reverse proxy with additional security related features.
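
As a sketch of the rule language (ModSecurity 2.x syntax from memory; consult the reference manual before use), a rule that rejects a crude SQL injection attempt in any request argument might look like:

SecRuleEngine On
SecRule ARGS "union.*select" "deny,log,status:403"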

It is an adjunct to a firewall, which can only do some basic session state analysis. There is one slide in a presentation on the site which provides a good summary of its capabilities:

  • Monitoring: know what happened
  • Detection: know when you are being attacked
  • Prevention: stop attacks before they succeed
  • Assessment: discover problems before the attackers do

It looks like mod_security is a very good tool for helping web developers protect themselves from things they don't know. Web developers focus more on content and less on security. This tool helps rebalance the problem.

SANS is a good place to start learning about security.

[/OpenSource/SiteOfTheDay/D200905] permanent link


Time Series Analysis on RRD Files

Crist Clark, in a posting on the NANOG mailing list, started an interesting thread on analyzing network traffic based upon frequency analysis rather than the traditional time based analysis. He started the thread by asking about Fourier analysis on network traffic time series. A number of responses indicated that wavelet analysis might be the 'more modern' approach. This type of analysis has been used for network traffic anomaly detection. The responses also indicate that operating systems can be deduced through analysis of the RTD (Round Trip Delay) of ping generated traffic.

Crist Clark started the thread:

Has anyone found any value in examining network utilization numbers with Fourier analyses? After staring at pretty MRTG graphs for a bit too long today, I'm wondering if there are some interesting periodic characteristics in the data that could be easily teased out beyond, "Well, the diurnal fluctuations are obvious, but looks like we may have some hourly traffic spikes in there too. And maybe some of those are bigger every fourth hour."

Dave Plonka Responded:

Such techniques are used in the area of network anomaly detection. For instance, a search for "network anomaly detection" at scholar.google.com will yield very many results.
Our 2002 paper, "A Signal Analysis of Network Traffic Anomalies" [ACM SIGCOMM Internet Measurement Workshop 2002, Barford, et al.], is one such work. We mention that we use wavelet analysis rather than Fourier analysis because wavelet/framelet analysis is able to localize events both in the frequency and time domains, whereas Fourier analysis would localize the events only in frequency, so an iterative approach (with varying intervals of time) would be necessary. In general, this is the reason why Fourier analysis has not been a common technique used in network anomaly detection.
That work used data stored in RRD files at five minute intervals. Our subsequent work used data stored at one second intervals, again in RRD files.

Anton Kapela had a couple of messages and a link (look for Kapela):

Indeed, there are. Interesting things emerge in frequency (or phase) space - bits/sec, packets/sec, and ave size, etc. - all have new meaning, often revealing subtle details otherwise missed. The UW paper [Barford/Plonka et. al] is one of my favorites and often referenced in other publications.
Along similar lines, I presented a lightning talk at nanog that demonstrates using windowed Ft's (mostly Gaussian or Hamming) in three-axis graphs (i.e. 'waterfalls') available in common tools (baudline, sigview, labview, etc) for characterizing round trip times through various network queues and queue states. Unexpectedly, interesting details regarding host IP stacks and OS scheduler behavior became visible.
I want to suggest that time windowed Ft might be a reasonable middle ground, certainly for Crist's case. Naturally, the trade-offs will be in frequency accuracy (ie. longer window) vs. temporal accuracy (ie. short window). Another solution for your needs might be cascaded FIR "bandpass" filters, but again, you're subject to time/frequency error trade-offs as related a filter's bandwidth.
While you're at it, consider processing your time series data into histogram stacks, or nested histograms. I haven't specifically seen a paper covering this, but another UW gent (DW, are you reading this?) used to process their 30 second ifmib data into a raw .ps file, and printed this out weekly/daily. The trends visible here were quite interesting, but I don't think much further work was done to see if anything super-interesting was more/less visible in this form than traditional ones.
... one point - since packets/bits/etc data is more monotonic than not (math wizards, please debate/chime in) and since it's not a 'signal' in the continuous sense, you might find value in differentially filtering the input data *before* FT or wavelet processing. This would serve to remove the weird-looking "DC" offset in the output simply by creating a semi-even distribution of both positive and negative input sample values.
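
To experiment with these techniques, the raw series first has to come out of the RRD files; rrdtool's fetch command dumps a timestamped series which can then be fed to an external analysis tool (file name and time window are illustrative):

rrdtool fetch traffic.rrd AVERAGE -r 300 -s -86400 -e now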

[/OpenSource/Debian/Monitoring] permanent link


Routing Within An ISP

Many ISP's I've seen have had two routing protocols implemented: BGP to talk to the 'internet' with the external /24 and shorter prefixes, and an internal routing protocol such as EIGRP or OSPF to handle the internal /24 and longer prefixes. The internal protocol would be running on all ISP devices and would handle all infrastructure devices and customer links. For a multi-homed ISP, BGP would need to be running on all internal devices that form internal paths from one external link to another. This provides an ability to choose an appropriate exit point for any traffic generated from within an ISP destined for the external network. Some ISP's 'cheat' by generating default routes to the nearest exit and having BGP reside only on edge devices. Some optimum paths will be missed using this simplified arrangement, particularly if an ISP is connected to non-transit neighbors.

Current best practices make expanded use of BGP. BGP, known as IBGP, is used extensively within the ISP to carry customer prefixes. The internal routing protocol such as OSPF or EIGRP is used simply for carrying infrastructure routes such as loopback addresses and link addresses.
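
A sketch of the IBGP side of such a design on a Cisco box (AS number and addresses invented), with the IGP left to carry only the loopback used for peering:

router bgp 64500
 neighbor 10.255.0.2 remote-as 64500
 neighbor 10.255.0.2 update-source Loopback0
 neighbor 10.255.0.2 next-hop-self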

With this arrangement, it is then easy to make use of MP-BGP (Multi-Protocol BGP) to handle the various requirements for carrying MPLS links.

One presentation at RIPE shows some basics of BGP Best Practices.

[/Cisco] permanent link


64 Bit Data Models

As we move to 64 bit processors, variable types and their widths change. I had originally thought that there would be a consistent naming convention as one moved from 32 bit programming to 64 bit programming. At a 64 Bit Wiki Entry, I find that such is not the case. Different compilers choose different ways. For example the Microsoft VC compiler will use the LLP64 model which keeps an int as 32 bits. This is something that one needs to keep in mind when re-compiling software created for 32 bit processors in a 64 bit environment.

In the same article, mention is made that it is a good habit to make use of 'ptrdiff_t', which is declared in <cstddef>, when subtracting two pointers and using the result.

[/Personal/SoftwareDevelopment] permanent link


2009 Apr 26 - Sun

Boost Preprocessor: Arrays

Typically, in some form of C++ best practice summaries, it is recommended to stay away from using the C++ macro preprocessor. For the most part, except when I needed Microsoft MFC message maps, which use preprocessor macros, I have followed this maxim. Until now.

I came across a situation where one section of code is dependent upon the order of declarations in another section of code. With manual code preparation, and even if things are documented appropriately, it is easy to forget to update the inter-related sections of code properly.

An example is when initializing the column definitions of an MFC CListView. I'd like to construct an enumeration of column indexes and ensure those remain in-sync with any changes I may make to the CListView column definitions themselves.

I hadn't realized the power of the C++ macro preprocessor until I started reading Appendix A: An Introduction to Preprocessor Metaprogramming in the book "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy.

By using the Boost Preprocessor Library, the power of the C++ Macro Preprocessor is realized.

I can now define my column structures and associated variables in a single header file. I also define various extraction macros.

#include "boost/preprocessor/tuple/elem.hpp"
#include "boost/preprocessor/array/elem.hpp"
#include "boost/preprocessor/array/size.hpp"
#include "boost/preprocessor/punctuation/comma_if.hpp"
#include "boost/preprocessor/repetition/repeat.hpp"


#define COLHDR_DELTAS_ARRAY_ELEMENT_SIZE 6
#define COLHDR_DELTAS_ARRAY \
  (15, \
    ( \
      (COLHDR_DELTAS_COL_UndSym, "UndSym", LVCFMT_LEFT,  50, std::string, m_sSymbolUnderlying), \
      (COLHDR_DELTAS_COL_Sym   , "Sym",    LVCFMT_RIGHT, 50, std::string, m_sSymbol), \
      (COLHDR_DELTAS_COL_Strk  , "Strk",   LVCFMT_RIGHT, 50, double,      m_dblStrike), \
      (COLHDR_DELTAS_COL_Expiry, "Expiry", LVCFMT_RIGHT, 50, ptime,       m_dtExpiry), \
      (COLHDR_DELTAS_COL_Bid   , "Bid",    LVCFMT_RIGHT, 50, double,      m_dblBid), \
      (COLHDR_DELTAS_COL_BidSz , "BidSz",  LVCFMT_RIGHT, 50, int,         m_nBidSize), \
      (COLHDR_DELTAS_COL_Sprd  , "Sprd",   LVCFMT_RIGHT, 50, double,      m_dblSpread), \
      (COLHDR_DELTAS_COL_Ask   , "Ask",    LVCFMT_RIGHT, 50, double,      m_dblAsk), \
      (COLHDR_DELTAS_COL_AskSz , "AskSz",  LVCFMT_RIGHT, 50, int,         m_nAskSize), \
      (COLHDR_DELTAS_COL_Pos   , "Pos",    LVCFMT_RIGHT, 50, int,         m_nPosition), \
      (COLHDR_DELTAS_COL_AvgCst, "AvgCst", LVCFMT_RIGHT, 50, double,      m_dblAverageCost), \
      (COLHDR_DELTAS_COL_Delta , "Delta",  LVCFMT_RIGHT, 50, double,      m_dblDelta), \
      (COLHDR_DELTAS_COL_Gamma , "Gamma",  LVCFMT_RIGHT, 50, double,      m_dblGamma), \
      (COLHDR_DELTAS_COL_UnRlPL, "UnRlPL", LVCFMT_RIGHT, 50, double,      m_dblUnrealizedPL), \
      (COLHDR_DELTAS_COL_RlPL  , "RlPL",   LVCFMT_RIGHT, 50, double,      m_dblRealizedPL) \
      ) \
    ) \
  /**/

#define COLHDR_DELTAS_EXTRACT_COL_DETAILS(z, n, m, text) \
  BOOST_PP_TUPLE_ELEM( \
    COLHDR_DELTAS_ARRAY_ELEMENT_SIZE, m, \
      BOOST_PP_ARRAY_ELEM( n, COLHDR_DELTAS_ARRAY ) \
    )

#define COLHDR_DELTAS_EXTRACT_ENUM_LIST(z, n, text) \
  BOOST_PP_COMMA_IF(n) \
  COLHDR_DELTAS_EXTRACT_COL_DETAILS( z, n, 0, text )

#define COLHDR_DELTAS_EMIT_InsertColumn( z, n, VAR ) \
  m_vuDeltas.InsertColumn( VAR++, \
    _T(COLHDR_DELTAS_EXTRACT_COL_DETAILS(z, n, 1, ~)), \
    COLHDR_DELTAS_EXTRACT_COL_DETAILS(z, n, 2, ~), \
    COLHDR_DELTAS_EXTRACT_COL_DETAILS(z, n, 3, ~) \
    );

#define COLHDR_DELTAS_EMIT_DefineVars( z, n, text ) \
  COLHDR_DELTAS_EXTRACT_COL_DETAILS(z, n, 4, ~) \
  COLHDR_DELTAS_EXTRACT_COL_DETAILS(z, n, 5, ~)\
  ;

Then in my class declaration, I can extract the enumerations in the correct 0-based order:

  enum enumColHdrDeltasCol {
    BOOST_PP_REPEAT( BOOST_PP_ARRAY_SIZE( COLHDR_DELTAS_ARRAY ), COLHDR_DELTAS_EXTRACT_ENUM_LIST, ~ )
  };

The repetitive code of creating the columns in the CListView is handled through repetition and extraction macros:

  int ix = 0;
  BOOST_PP_REPEAT( BOOST_PP_ARRAY_SIZE( COLHDR_DELTAS_ARRAY ), COLHDR_DELTAS_EMIT_InsertColumn, ix )
  // m_vuDeltas.InsertColumn( ix++, "UndSym", LVCFMT_LEFT, 50 );

I'll be able to further use the initial structure to create the row factory for keeping the CListView and row-structures synchronized. If I happen to change my mind on column ordering, all related code sections are automatically updated.

[/Personal/SoftwareDevelopment/CPP] permanent link


2009 Jan 02 - Fri

Wanted: A Single C++ Singleton

The Singleton Concept is a reasonably simple one. For writing software, the concept of a singleton stipulates that only one instance of a class will be instantiated during run time. All references to an object of a particular class will be to only one instance. The instance is usually created at program startup, often before 'main', and destroyed at the program's end, often after 'main' exits.

I wish to use the concept of a Singleton in my C++ based trading software for Manager classes which keep track of Providers, Instruments, and Portfolios.

Thinking that the Singleton Design Pattern was simple, I figured I could implement my own flavour. But I decided to do some research first. It turns out there are simple ways, complicated ways, and controversial ways.

From the controversial side, some consider using Singletons as less than desirable, as the concept introduces global state, which in turn reduces modularity and compartmentalization, and as a consequence increases the complexity of testing. These are valid reasons, and I see them being applicable to situations where a programmer uses Singletons for small objects or built-in data types.

In my situation, I wish to form Singletons of larger self-contained classes. It doesn't make logical sense to have multiple managers, and as such, the Singleton concept enforces/implements my need for singleton managers.

With the Boost Libraries being as comprehensive and well written as they are, I figured they should offer up a good singleton of a Singleton implementation. Nope. In doing a search through the library, I find about four or six or more different flavours of singleton.hpp, and nothing in the 'common areas' of the library:

  • boost/serialization/singleton.hpp: uses boost::noncopyable, has medium multi-threading capability, and contains some .dll dependencies
  • boost/log/detail/singleton.hpp: uses boost::noncopyable, is a simpler class, not sure if it is thread safe, and makes use of #defines like BOOST_ONCE_INIT
  • boost/pool/detail/singleton.hpp: a simple, self-contained class designed for instantiation before main; basically a Meyers Singleton implemented through a template mechanism
  • boost/thread/detail/singleton.hpp: a very minimalistic singleton

There are notes in some locations indicating that these shouldn't be used, as they are essentially 'library internal' routines and are subject to undocumented change.

Hmmm, maybe Singletons are complicated. Alex Ott's Blog mentioned that there was actually a Singleton submission (documentation) to Boost back at the beginning of 2008. The submission promised to handle both single-threaded and multi-threaded implementations of the Singleton Pattern. It was rejected: reviewers wanted to see a more modularized approach, and the submitter indicated that he had run out of time to do so. In reading the review thread, a number of writers didn't like the complexity of the Loki library, and thus the Boost submission took a different tack. I'm of the impression that the submission would have a good chance of succeeding if it followed the 'programming by policy' method used in Loki: under the hood there is some complexity, but to the user the interface is clean and modular. For those interested in the submission code, it resides, mouldering, in the sandbox.

Loki, which was started by Andrei Alexandrescu for his book "Modern C++ Design", has a Singleton implementation, but no on-line documentation. Going back to the book, he goes into some detail on the design ideas and usage notes for his Loki SingletonHolder class. Indeed, he says that there is no one-size-fits-all singleton. And looking at the doxygen class notes, there is a variety of construction, lifetime, and threading policies available as template parameters. Perhaps this file could be spruced up and submitted to Boost.

Andrei Alexandrescu and Scott Meyers wrote a paper back in September 2004 called C++ and the Perils of Double-Checked Locking. It goes into the gory details of why multi-threaded Singletons are so hard to implement correctly. Much of it has to do with compiler optimizations, and with the fact that the C++ abstract machine is defined for single-threaded execution only. It is interesting to note the implication that C++ is not really a multi-threaded language, from a philosophical and design perspective: it has been forced into that world with assembly code and operating-system API workarounds. C++ has been remarkably malleable when it comes to metaprogramming, object-oriented programming, and any number of other programming paradigms. To fall down on the job of multi-threading may be an indication of the difficulties inherent in moving from single-threading to multi-threading and multi-core processing.
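
The heart of the problem is easy to sketch. Below is the classic (broken) double-checked locking idiom the paper dissects; the names here are illustrative (Lock is an assumed scoped mutex guard, pInstance an assumed static Singleton* initialized to 0):

// the classic, broken, double-checked locking idiom
Singleton* Singleton::instance() {
  if ( 0 == pInstance ) {           // first check, done without the lock
    Lock lock( mutex_ );            // serialize the slow path
    if ( 0 == pInstance ) {         // second check, now under the lock
      pInstance = new Singleton;    // problem: compiler/CPU reordering may
    }                               //  publish pInstance before construction completes
  }
  return pInstance;
}

Another thread can observe a non-null pInstance between the pointer assignment and the completion of the constructor, which is exactly the reordering the paper shows cannot be ruled out in portable C++.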

In addition to the library submitted to Boost, another author offers up his version of a Thread-Safe C++ Singleton. His writing indicates he uses the concept of a Phoenix Singleton, a Singleton which can recreate itself after destruction. The book Modern C++ Design goes into a description of this as well.

For a simple, single-threaded, self-contained Singleton which manages itself, a C++ Singleton Pattern is available. It uses the Curiously Recurring Template Pattern (CRTP) and overrides new and delete, but does not use reference counting to keep things straight, which may cause problems in some use cases. It is like a Phoenix Singleton, but doesn't really handle LIFO-ordered creation and destruction properly.
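
For reference, a minimal sketch of the CRTP shape itself (this is a Meyers-style variant for illustration, not the linked article's new/delete implementation; PortfolioManager is a made-up client class):

template<typename T>
class Singleton {  // base class template, parameterized on the derived class
public:
  static T& Instance() {
    static T instance;  // constructed on first call
    return instance;
  }
protected:
  Singleton() {}   // derived classes may construct the base
  ~Singleton() {}
private:
  Singleton( Singleton const& );             // copying disabled
  Singleton& operator=( Singleton const& );  // assignment disabled
};

class PortfolioManager: public Singleton<PortfolioManager> {
  friend class Singleton<PortfolioManager>;  // let Instance() construct us
private:
  PortfolioManager() {}
};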

Scott Meyers' "Effective C++, Third Edition" has a description of what has been termed the Meyers Singleton. It hides the constructor, copy constructor, assignment operator, and destructor, and provides a static method for returning a reference to the one object instance. Here is a specific version of the Meyers Singleton:

class InstrumentManager {
public:
  static InstrumentManager &Instance() {
    static InstrumentManager _InstrumentManager;  // local static object initialization
    return _InstrumentManager;
  }
  void BasicMethod( void );
private:
  InstrumentManager();  // constructor (ctor) is hidden
  InstrumentManager( InstrumentManager const & );  // copy ctor is hidden
  InstrumentManager &operator=( InstrumentManager const & );  // assignment operator is hidden
  ~InstrumentManager();  // destructor (dtor) is hidden
};
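
Usage then looks like the following; since the copy constructor is private, the instance can only ever be referenced, never copied:

InstrumentManager::Instance().BasicMethod();               // fine
// InstrumentManager im( InstrumentManager::Instance() );  // error: copy ctor is private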

A Generic Meyers Singleton:

// singleton.h
#ifndef SINGLETON_H
#define SINGLETON_H

template <typename T>
class CSingleton {
public:
  static T& Instance() {
    static T _instance;  // constructed on first use
    return _instance;
  }
private:
  CSingleton();                                // ctor hidden
  ~CSingleton();                               // dtor hidden
  CSingleton( CSingleton const& );             // copy ctor hidden
  CSingleton& operator=( CSingleton const& );  // assign op hidden
};

#endif
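
To use the generic version, the managed class has to grant the template access to its constructor, since Instance() is the one place the static instance gets built. A sketch, reusing the InstrumentManager names from above (the friend declaration is the piece that is easy to forget):

#include "singleton.h"

class InstrumentManager {
  friend class CSingleton<InstrumentManager>;  // allow Instance() to construct
public:
  void BasicMethod( void );
private:
  InstrumentManager();  // still hidden from everyone else
};

// at a call site:
// CSingleton<InstrumentManager>::Instance().BasicMethod();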

In summary, I think I'll end up using the singleton from boost/pool/detail/singleton.hpp, as it can wrap general classes without requiring them to be written as specific Meyers Singletons, which may or may not be a good thing. If, at some point in the future, I get some free time, tackling the Loki-to-Boost Singleton conversion might be an interesting learning experience.
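
As a closing sketch, wrapping a manager in the pool-detail singleton would look something like the following. This assumes the boost/pool/detail/singleton.hpp interface as it stood around Boost 1.3x (namespace boost::details::pool, class singleton_default with a static instance() method); being a detail header, none of that is guaranteed to stay put:

#include <boost/pool/detail/singleton.hpp>

// the wrapped class needs a public default constructor here, since the
// wrapper, not the class itself, creates the one static instance
typedef boost::details::pool::singleton_default<InstrumentManager> theManager;

void Example() {
  theManager::instance().BasicMethod();  // always the same InstrumentManager
}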

See also Singleton Per Thread.

[/Personal/SoftwareDevelopment/CPP] permanent link


