drink the sweet feeling of the colour zero

An outsider’s guide to Silicon Valley


I’ve recently had the opportunity to spend the better part of a week in the San Francisco bay area. I explored, schmoozed, and took notes on the people, the culture and the interactions I witnessed. This trip has been an eye-opener for me; it has helped me to understand a lot about the mentality and motivations that drive the people who make the software I use every day.

In Silicon Valley, ideas have value. This, by far, is the biggest take away from the entire trip. It is a dramatic contrast to the culture of my home province of Alberta. Back home, ideas are an inconvenience. Ideas are not something that worker bees are paid to have, and they certainly aren’t paid to express them to the rest of the hive.

The currency of Alberta is labour: the implementation of other people’s ideas. My entire life, I’ve been taught that ideas are delivered by rote learning: you are taught them in schools, read them in magazines, or receive them from someone higher up the food chain. I’ve been lucky to have the occasional client who is interested in my ideas, but as someone who regularly generates a great many ideas I have often felt out of place.

Silicon Valley is different; here, ideas are encouraged from all sides. There is a recognition that everyone can have a good idea, and that these ideas can lead to real money. Ideas have such value here that they are closely guarded; the easiest way to draw quizzical stares is to share ideas freely. Outside of corporate brainstorming sessions, this just isn’t done. The generation of ideas is expected to occur under NDA, or at the very least with a fat contract involved.

Once the shock wears off and people realize that you have a different viewpoint and are willing to share ideas freely, the notepads, recorders and cameras come out. There’s money to be made off your intellectual generosity, and this is an entire culture dedicated to the business of doing just that.

The generation of ideas is key to the survival of Silicon Valley. Novelty is what provides these corporations the momentum necessary to avoid stagnation and eventual death. A culture that treats ideas as currency balks at the mere implementation or refinement of extant concepts; “doing one thing, but doing it really well” leads to boredom, and eventually staff exodus.

Silicon Valley is filled with people who do not necessarily crave being the best. It is filled with people who crave the challenge of new intellectual puzzles. Put enough universities in a confined area, foster a culture of academia and intellectual reverence and you have a multi-million-person idea factory. Once the ideas are past the brainstorming stage, however, it is often time to take them out of the valley.

I ran into a lot of folks from the southern US on my trip. I began to realize that there was a very good reason that so many of these folks were present in the valley. They represented the implementers. They are here to take an idea back to a completely different culture. The gentlemen from the southern United States worked closely with their west coast brethren, but they valued above all the art of doing.

In companies that have established a good cadence for bringing products to market a familiar pattern has emerged. Once an idea has proven to have a market, it is handed off to a separate team to implement, refine and slowly evolve. These people glory in perfection; the art of doing a job right. They are not a culture of dreamers and ideologues, they are a culture of engineers. Craftsmen of the highest calibre who take pride in their work, for they believe it is the quality of that work which stands as a testament to their existence.

This is the yin and the yang of the IT world. Implementation is worth nothing without something to implement; novelty is worth nothing if the idea cannot be made to work.

The third leg of the IT tripod was made up of those folks I encountered from the east coast. There were not as many of them, but the impression they provided was overwhelming. The take away from the east coasters is that they grok people. They are salesmen of the very highest calibre: charisma and charm, schmooze and cunning.

These folks were in the valley for one reason: to bring completed products to market. Their job is to figure out what the current state of affairs is, figure out how the product can be sold, and then go forth and sell it.

These are no slimy used car salesmen trying to upsell trash for a margin. These are people who look for break points in development where a product can be marketed before entering another round of refinement and evolution. These are people who do numbers; risk assessments and projections, focus groups and polling. They are the social glue that binds the culture of ideas to the culture of implementation and prepares the combination for consumption by the masses.

Nothing is so simplistic, of course; none of these cultures is so completely segregated that one could exist without the others. While it is certainly possible to run a successful business confined to either region, the overall cultures they nurture – the everyday subconscious signals that each of us receive from those around us since birth – place a completely different societal value on different skillsets in each of these places.

The cultures we are raised in have a noticeable effect on our ability to take advantage of our genetic predispositions; to recognize and utilize our innate talents to their fullest. Finding the right culture for a given set of requirements – corporately as well as socially – can help us grow as businesses and as individuals.

I now understand how Google went from a single rack of servers to the GDP of a small nation, or how Hewlett and Packard could start a technology revolution from their garage. This realization has real world impacts on a personal scale as well. It is by combining approaches to problem solving that we will see optimal results. Don’t try to force the “one true way” on everyone. Instead, it is worth the time to get to know – and nurture – the natural talents of those you work with.

NFS Client in Windows 7 Pro


I realise this is a little late to the game, but I find Microsoft’s attitude towards end users offensive.  Take, for example, the statement that “NFS Client isn’t something we usually support here” on the “Microsoft Answers website since Answers is directed towards consumers.”  Consumers are increasingly operating in heterogeneous environments thanks in no small part to Microsoft’s steadfast insistence on not actually listening to its customer base.  For better or worse, Mac desktops and notebooks are seeing a dramatic rise, especially within North America.  Microsoft knows this.

This has a direct effect on the topic at hand in that consumer level devices are now increasingly being shipped not only supporting NFS, but with NFS as the default protocol.  NFS (and similar heterogeneous cohabitation technologies) quite simply are consumer-level technologies today.  Attempting to proclaim it otherwise because it doesn’t meet with the party line on the topic does nothing but further alienate the customer base.

Not that the arbitrary stratification of versioning that leaves those of us with “Windows 7 Professional” operating systems out in the cold hasn’t done that already.

That rant over and done with, let’s get around to actually helping people here!  Some NFS client information of relevance to real people, in the real world:

1) A Google Code project that brings NFS v2/3 support to Windows, with NFS 4.1 support under active development but not yet supported: nekodrive.  Quite frankly, this isn’t quite ready for prime time, unless you are willing to be a little nerdy about it.  It is okay for one-off work, but doesn’t operate nearly as seamlessly as a proper client.

2) The University of Michigan NFS v4.1 client.  This is the exact same client for NFS 4.1 that Microsoft included in Windows 8.  (Indeed, Microsoft funded its development.)  It is located here.  However, it does take a little bit of knowledge to install.  I have found it easily scriptable for installs on a mass scale, and certainly not a problem for installs on my home machine.

The project maintains a regular code drop, and the binaries can be accessed here.  Alongside the install instructions above, any novice computing enthusiast who has actually typed “start, run, CMD” before will be perfectly able to get a top notch NFS 4.1 client up and running on Windows 7 Professional.

I can’t recommend this 4.1 client enough.  If you have NAS devices supporting NFS 4 (for example, a Synology with the latest DSM), this client is great at bridging the gap between Windows and Mac.

3) There was a company called Labtam that once made a relevant product.  The website is still up, however all indications are that they ceased to exist towards the end of 2009.  It may be worth further investigation to see if they have sold the tech on to someone, as the internets claim it was reasonably reliable for NFS v3.  At $40, it’s significantly cheaper than an “anytime upgrade,” and has the additional bonus of neither condoning nor encouraging Microsoft’s arbitrary product segregation.

Will Windows 8 – presuming you can stomach Metro – be more of the same?  Or will the reduced edition count lead to an unprecedented breakout of sanity?  Somehow, I doubt it.

Basic Linux Bandwidth Shaping


This post is largely for my own personal reference.

Bandwidth shaping has traditionally been very difficult.  To truly understand it you must know a fair amount about networking.  The tools are somewhat arcane.  Fortunately, some folks have given us all a leg up by greatly simplifying the process.  It is still far from “easy peasy,” but it is also no longer the black art it once was.

I have recently had cause to configure bandwidth shaping on my edge Linux routers.  For this task I have made use of the Hierarchical Token Bucket (HTB) queuing that is part and parcel of modern Linux kernels.  I set up the HTB init script on my edge Linux routers, and need to document exactly how I installed the whole thing.  This includes the HTB Webmin module, which I use as a visual reference to tell me if I have configured the HTB text files properly.

This setup involves the following:
– HTB Init Script http://sourceforge.net/projects/htbinit/
– HTB Webmin Module http://sehier.fr/webmin-htb/
– Webmin http://www.webmin.com/
– CentOS 5.5 http://centos.org/

This setup presumes the following:

– You have set up and configured a CentOS 5.5 system
– You have configured the networking on this system to serve as a network router
– You have installed Webmin
– You have properly configured the firewall to allow access to Webmin (default port: 10000)
– You have a basic working knowledge of Linux commands (wget, chmod, chown and similar.)
– You have a basic working knowledge of Webmin (how to add modules and navigate the GUI.)

Step 1:  Install the htb.init script.

Wget my modified HTB Init script (http://www.trevorpott.com/downloads/htb/htb.init) into /etc/init.d.  Chmod it 0755 and chown it root:root.  Using my modified version is necessary because of an incompatibility in the original script’s use of the “find” command (-maxdepth was improperly positioned and blew up under CentOS 5.5).
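If it helps, the whole of this step boils down to the following (assuming the download URL above is still live):

cd /etc/init.d
wget http://www.trevorpott.com/downloads/htb/htb.init
chmod 0755 htb.init
chown root:root htb.init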

The original unmodified script is available on sourceforge here.

Step 2:  Install the Webmin HTB module.

Webmin –> Webmin Configuration –> Webmin Modules
Select “third party module from” and enter http://sehier.fr/webmin-htb/webmin-htb.tar.gz
(Note: also cached here: http://www.trevorpott.com/downloads/htb/webmin-htb.tar.gz)
Select “Install Module”.

This will properly create the Webmin module directory under /etc/webmin, as well as register the module with Webmin itself.  Sadly, as this is not a .wbm, there are some bugs.

Wget the .tar.gz into /etc/webmin/ and run “tar -zxvf” against it.  This will unpack the files into the /etc/webmin/htb directory with all the proper permissions.  You can now delete the webmin-htb.tar.gz file.
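In other words, roughly this (using the cached tarball URL from the note above):

cd /etc/webmin
wget http://www.trevorpott.com/downloads/htb/webmin-htb.tar.gz
tar -zxvf webmin-htb.tar.gz
rm webmin-htb.tar.gz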

Step 3:  Setting up the config files.

Create the directory /etc/sysconfig/htb.  The directory should be chmoded to 0755 and chowned root:root.  This directory houses the files that the htb.init script will use to configure htb on your system.  A sample configuration is provided here: http://www.trevorpott.com/downloads/htb/archive.tgz
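Or, at the command line:

mkdir /etc/sysconfig/htb
chmod 0755 /etc/sysconfig/htb
chown root:root /etc/sysconfig/htb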

An explanation of how these files work is provided below.

Step 4: Install the Tree::DAG_Node Perl library.

Enter the following (without the quotes) into the command line: “cpan -i Tree::DAG_Node”.  When it asks if you would like to build manually, enter “no.”

Step 5: Check the configuration.

To check your config using Webmin do the following:
Webmin –> Networking –> Hierarchy Token Bucket queuing.  This module allows you to configure HTB and provides alerts if there are misconfigurations.

To check the configuration using the command line do the following:
/etc/init.d/htb.init compile

Step 6:  Start the service.

If you are satisfied with the configuration, then in Webmin do the following:
System –> Bootup and Shutdown
Select “htb.init” and click “start now and on boot.”
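Once the service is running, it is also worth asking the kernel directly what got programmed.  The statistics output from tc is a quick sanity check that packets are actually landing in the classes you defined (eth0 used here as an example):

tc -s class show dev eth0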

How the config files work:

The configuration for the HTB init script is picked up both from the naming of the config files and their content.  To really understand how HTB works you should read the user manual.  The significantly dumbed down version is as follows:

HTB provides a way to do bandwidth shaping using your Linux box.  You set up a “root” class that contains the total bandwidth you wish to allocate for one group of IP addresses or ports.  You then create classes which are subordinate to this root class.  Each of these classes can be (and indeed should be) guaranteed a minimum amount of bandwidth.  All classes subordinate to the root can also be configured with a ceiling.  The ceiling parameter is the maximum bandwidth that class can consume.

Should the root class have extra unconsumed bandwidth available (because one of the other subordinate classes is not consuming its full allotment), then any subordinate classes requiring bandwidth above and beyond their minimum guaranteed amount will be able to “borrow” bandwidth from another class subordinate to the same root.

Subordinate classes can for example be an IP address, a subnet or a specific class of traffic such as “all traffic to port 80.”

A working real world example (as per the provided archive.tgz) is thus:

I have a small Linux box configured with two network interfaces: eth0 and eth1. My provider offers me a 100Mbit pipe, however they charge me based on throughput usage rather than total bandwidth consumption. Thanks to this, I wish to limit my total possible throughput consumption to 15Mbit symmetrical.

For the purposes of this demo file, I have changed all the IP addresses involved to be on the 10.0.0.128 /27 subnet. In the real world the subnet in use has externally addressable addresses as I am using this system to shape throughput to a subnet provided me by my ISP.

eth0 is the interface going out to my ISP.
eth1 is the interface on which my ISP-delegated subnet can be found.

By shaping traffic on the eth0 interface I can control the speed of traffic flowing from my provisioned subnet to my ISP. (Upstream traffic.)

By shaping traffic on the eth1 interface I can control the speed of traffic flowing from my ISP to my provisioned subnet. (Downstream traffic.)

The file eth0 contains one line: default=91. This tells HTB that the default class for all unclassified traffic on eth0 will be 91. The file I have set up to define this is eth0-2:91.default_up

The file eth0-2.root_up defines the root class for eth0. The root class is the number 2. The HTB init script infers this from the filename. Everything before the dash (eth0) is the interface. Everything after the dash but before the period (2) is the class. Everything after the period (root_up) is the “friendly name” of the class.

Looking at the default file we see that it has a colon. As with the root class, everything before the dash (eth0) is the interface, and everything after the period (default_up) is the “friendly name” of the class. Everything after the dash but before the period (2:91) is the class specification; since it contains a colon, the script will parse this as being “class 91, subordinate to class 2.”

You will notice also several files with “friendly names” consisting of three numbers. These “friendly names” are simply the last octet of the IP address that rule is defining bandwidth shaping for.
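To give a rough feel for the contents as well as the names: the body of each file is a set of KEY=value lines in the htb.init style.  A sketch of what eth0-2:91.default_up might contain, based on the values in the hierarchy below (check the htb.init documentation for the exact parameter names your version supports):

RATE=2Mbit
CEIL=15Mbit
BURST=15k

The per-IP classes would additionally carry a RULE= line matching the relevant source (upstream) or destination (downstream) address.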

The logical hierarchy defined by the filenames and their contents is as follows:

-eth0: all unclassified traffic will use class 91
–class 2 (root_up): Maximum throughput of 15Mbit.
—class 11 (137): Guaranteed 1Mbit, Ceiling of 15Mbit, SRC 10.0.0.137
—class 21 (157): Guaranteed 5Mbit, Ceiling of 15Mbit, SRC 10.0.0.157
—class 31 (133): Guaranteed 1300Kbit, Ceiling of 15Mbit, SRC 10.0.0.133
—class 41 (132): Guaranteed 600Kbit, Ceiling of 15Mbit, SRC 10.0.0.132
—class 51 (158): Guaranteed 600Kbit, Ceiling of 15Mbit, SRC 10.0.0.158
—class 61 (144): Guaranteed 4Mbit, Ceiling of 15Mbit, SRC 10.0.0.144
—class 71 (136): Guaranteed 500Kbit, Ceiling of 15Mbit, SRC 10.0.0.136
—class 91 (default_up): Guaranteed 2Mbit, Ceiling of 15Mbit, Burst 15k

-eth1: all unclassified traffic will use class 91
–class 2 (root): Maximum throughput of 15Mbit. Burst in 15k increments.
—class 10 (137): Guaranteed 4Mbit, Ceiling of 15Mbit, DEST 10.0.0.137
—class 20 (157): Guaranteed 5Mbit, Ceiling of 15Mbit, DEST 10.0.0.157
—class 30 (133): Guaranteed 1300Kbit, Ceiling of 15Mbit, DEST 10.0.0.133
—class 40 (132): Guaranteed 600Kbit, Ceiling of 15Mbit, DEST 10.0.0.132
—class 50 (158): Guaranteed 600Kbit, Ceiling of 15Mbit, DEST 10.0.0.158
—class 60 (144): Guaranteed 1Mbit, Ceiling of 15Mbit, DEST 10.0.0.144
—class 70 (136): Guaranteed 500Kbit, Ceiling of 15Mbit, DEST 10.0.0.136
—class 90 (default): Guaranteed 2Mbit, Ceiling of 15Mbit

An example based on this configuration is that of an FTP server located at 10.0.0.137. It has a guaranteed 1Mbit up and 4Mbit down. (It is mostly used for other people to send files up to us.) It can however receive or send information at up to 15Mbit, should none of the other systems on the subnet be consuming their allotments.

Notes:
The rate limits for each network card are set at 15Mbit total. That rate limit will affect both upstream and downstream traffic on each NIC. While I am only defining upstream caps on my eth0 NIC and downstream caps on my eth1 NIC, this configuration effectively limits my system to 15Mbit half duplex. This is by design. I want to be able to send at 15Mbit upstream or receive information at 15Mbit downstream, but I also do not want my combined upstream and downstream to surpass 15Mbit. It is a quirk of how I am billed (95th percentile of half-duplex consumed throughput.)

Additionally, of a possible usable 29 IP addresses in this subnet, only 7 are explicitly defined in the bandwidth shaping rules above. Any servers located on other IPs within the subnet would fall under the “default” rule. This allows me to do three important things:

1) Guarantee traffic to specific computers within my subnet.
2) Cap the total bandwidth consumed by all computers to 15Mbit.
3) Force all systems not explicitly defined to obtain throughput by contention.

There you have it:  a dumbed down overview of a very basic HTB shaping setup using the HTB.init script.

Cascading Webmin Groups


When trying to delegate modules to users in Webmin we are sadly limited by the ability to add users to only one Webmin group at a time.  Fortunately, one can cascade groups in order to work around this limitation.  Groups can function as a superset of one another as per the following example:

I require Alice, Bob and Cassandra to be able to alter SpamAssassin’s configuration.  I can create a Webmin group called SpamAssassin_Users with access to that module and add all three users to it.  Should I require Bob and Cassandra to additionally have access to the System Logs module, I can create a second group called System_Logs_Users with access to that module.  I can make System_Logs_Users a member of the SpamAssassin_Users group and then make both Bob and Cassandra members of System_Logs_Users.  Bob and Cassandra now have access to both SpamAssassin and System Logs whilst Alice is limited to managing SpamAssassin.

This simple kludge can save you a lot of time when you finally sit down to delegate modules, but if you use this trick pay close attention to group permission inheritance!

El Reg Blog Articles: “DNS, Malware and You”


This group of articles is all about DNS and malware.  (Though spam hangs off of it too.)  Interesting for server admins.

Blackhole your malware
Malware protection for the rest of us
It’s time to presume the web is guilty

El Reg Blog Articles: “Browser Security”


While I no longer write articles in fixed sets of three, I do still tend to write clumps of articles with a common theme.  It’s usually because I write articles based upon what I am working on at the time.  This set of articles is based on browser security.  Frankly, I think they are critical for everyone to read.  Practice good Internet hygiene!

Ditch the malware magnet
Private lessons
Nothing succeeds like XSS

A simple spam server


I can’t afford a really pricy third-party spam filtering option.  GFI, Symantec, even Microsoft offer up some pretty robust solutions.  They are pricy though, and I don’t see why I should bother fighting that particular funding war when there are some easy solutions available for free.  In my particular environment, I run an Exchange 2010 server front-ended by a CentOS box running Sendmail, SpamAssassin, ClamAV and a few others.

The first and most important thing is to of course go get the latest and greatest CentOS.  As of the time of this write-up that would be CentOS 5.5.  Toss it in a virtual machine and install it with nothing but the bare bones.  In my case, I gave it two interfaces; one directly externally accessible, and the other on my local LAN.  (I trust iptables to keep the baddies out as much as I do any other firewall, so I see little reason to hide the spam server behind a separate firewall and port forward.)  Let’s get to the build.

0) Set up your IP addressing according to your own internal schema.  Pointing the spamserver at your internal DNS (probably your domain controller) saves you having to build extensive hosts files on the spam server.  (It will be talking to your active directory, so using your AD’s DNS is a good plan.)

1) Enable the RPMforge repo.  (https://rpmrepo.org/RPMforge/Using)  I use this for the simple reason that they have a tendency to keep ClamAV significantly more up-to-date than Red Hat (and thus CentOS) do.  If you don’t use RPMforge, eventually ClamAV will get so out of date it will refuse to download new definitions.  Save yourself the aggravation; use RPMforge.  (I tend to wget the latest rpm, then “yum install [rpm name] --nogpgcheck”.  This is because CentOS doesn’t natively have RPMforge’s key available, and RPMforge keeps changing the location on their site where they store the rpm installer for the key…)
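As a rough sketch of that (the exact RPM filename changes as RPMforge cuts new releases, so treat the one below as illustrative and grab whatever is current for el5):

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.i386.rpm
yum install rpmforge-release-0.5.2-2.el5.rf.i386.rpm --nogpgcheck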

2) Install the necessary software: yum install procmail sendmail sendmail-cf sendmail-milter clam* spamass* pyzor perl-Razor-Agent

3) Download and install Webmin: RPMs are available, and certainly work well enough.

4) Disable SELinux and allow ports 10000 (Webmin) and 25 (SMTP) through the firewall.  You can usually do this from the command line via system-config-securitylevel on a base CentOS install.  Don’t forget to restart the system after disabling SELinux!  I know that there are ways around disabling SELinux, but frankly I’m too lazy to futz with the thing.  (At some point in the future I will figure out how to get SpamAssassin and ClamAV working with SELinux enabled.)

5) Create a user called Sendmail in your Active Directory under the OU “Users.”

6) Save the password for this user in a file on the spam server.  I used /etc/mail/ldap.secret
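Something along these lines will do, locking the file down so that only root can read it (the password is, of course, whatever you set on the Sendmail account above):

echo 'TheSendmailAccountPassword' > /etc/mail/ldap.secret
chmod 0600 /etc/mail/ldap.secret
chown root:root /etc/mail/ldap.secret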

7) Log into Webmin, and under servers go to “Sendmail Mail Server.”

The following is what we are going to need to modify to get Sendmail to use ClamAV and SpamAssassin.  It will also be set up to talk to your domain controller in order to look up users when a server attempts to deliver mail.  In this way the Sendmail server will be able to reject recipients who don’t exist in your organization.  (Thus avoiding a truckload of NDRs from your exchange server.)

1) Under Webmin -> Servers -> Sendmail Mail Server -> Domain Routing (mailertable)

The mailertable tells Sendmail where to send e-mail it receives for a given domain.  In the example below, domain1.com and domain2.com are being redirected to internalmailserver.company.local.  To achieve this, click on “manually edit /etc/mail/mailertable.”  Update it to suit your configuration.

Mailertable example:
domain1.com smtp:internalmailserver.company.local
domain2.com smtp:internalmailserver.company.local

2) Under Webmin -> Servers -> Sendmail Mail Server -> Spam control (access)

This file contains a list of servers allowed to use your spam server as a relay.  While e-mail relays are generally a very bad plan, in this case they are an excellent way to scan all your outbound company e-mail.  Enter the internal IP address of your Exchange server (and any other e-mail sending systems) in your organization here.  You can then configure them to treat your spam server as a “smart host,” thus providing antiviral and antispam scanning for all outbound e-mail traffic.  To achieve this, click on “manually edit /etc/mail/access.”  Update it to suit your configuration.

Access list example:
172.16.0.30 RELAY
mail.internalmailserver.company.local RELAY

3) Under Webmin -> Servers -> Sendmail Mail Server -> Relay Domains
Enter a list (separated by carriage returns) of all domains that you will be handling internally and which you wish to pass through this spam server.

Relay Domains example:
Domain1.com
Domain2.com

4) Under Webmin -> Servers -> Sendmail Mail Server -> Sendmail M4 Configuration

This is the heart of configuring Sendmail.  Most of the default configuration provided by CentOS 5.5 is good, but we need to add a few goodies to get it working the way we want it.

The first and most important thing is the setting LOCAL_DOMAIN(`’).  There is a big push right now by e-mail administrators the world over to require reverse DNS.  The long story short is that the hostname of your spam server (as your incoming and outgoing mail point) absolutely must match the reverse DNS of the IP address assigned to it.  That reverse DNS also needs to contain the word “mail.”  So the hostname of your spamserver should be something akin to mail.domain.com, and the reverse DNS on your external IP address provided you by your ISP should also read mail.domain.com.

In this vein, it is a good idea to set LOCAL_DOMAIN(`’) to LOCAL_DOMAIN(`mail.domain.com’).  This means your spamserver would always accept mail for “mail.domain.com” without forwarding it to your exchange server (an odd requirement that some e-mail administrators have begun to put into place.)  It still allows you to forward mail bound for domain.com internally.
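Before going live it is worth confirming that the forward and reverse records really do agree.  From any Linux box (the hostname and IP below are placeholders):

dig +short mail.domain.com     # should return your external IP
dig +short -x 203.0.113.25     # should return mail.domain.com.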
Keep an eye out for this command: DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA').  Toss a dnl # in front of it if you want your sendmail to listen on any addresses other than 127.0.0.1!

I also tend to dnl # out EXPOSED_USER(`root') and FEATURE(`accept_unresolvable_domains') for sanity reasons.

The rest of the commands I won’t go into too much detail on; if you are really curious there is plenty of documentation available online as to their specific functions.  If you are reading this page, I trust you are capable of spotting where in the configuration you should be changing “domain.com” and “company.local” style commands to suit your configuration.

FEATURE(`greet_pause')dnl
define(`LUSER_RELAY',`error:5.1.1:"550 User unknown"')dnl
INPUT_MAIL_FILTER(`clamav-milter', `S=/var/clamav/clmilter.socket, T=S:4m;R:4m')dnl
INPUT_MAIL_FILTER(`spamassassin', `S=:/var/run/spamass.sock, F=,T=C:15m;S:4m;R:4m;E:10m')dnl
define(`confINPUT_MAIL_FILTERS', `clamav-milter,spamassassin')dnl
define(`confDOUBLE_BOUNCE_ADDRESS',`')dnl
FEATURE(`ldap_routing',, `ldap -1 -T<TMPF> -v mail -k proxyAddresses=SMTP:%0', `bounce')dnl
LDAPROUTE_DOMAIN(`domain1.com')dnl
LDAPROUTE_DOMAIN(`domain2.com')dnl
define(`confLDAP_DEFAULT_SPEC',`-h "domaincontroller.company.local" -d "CN=sendmail,CN=Users,DC=company,DC=local" -M simple -P /etc/mail/ldap.secret -b "DC=company,DC=local"')dnl

Once you have finished this, go save and rebuild the Sendmail configuration.  It’s a good plan to restart Sendmail at this point to see if it blows up.  Remember that Sendmail is really grouchy if you have an extra carriage return, or forget a ` or a '.

For SpamAssassin configuration, first go to Webmin -> Servers -> SpamAssassin Mail Filter -> Setup Procmail For SpamAssassin and enable SpamAssassin.

Next stop is Webmin -> Servers -> SpamAssassin Mail Filter and modify to your heart’s desire.  I generally change the setting “Prepend text to Subject: header” to read [SPAM ASSASSIN DETECTED SPAM].  This then allows me to set either an Outlook rule or an Exchange -> Hub Transport -> Transport rule.

In the case of a local Outlook rule each client must be individually configured to deal with the [SPAM ASSASSIN DETECTED SPAM] in the subject line of “spam” e-mails.  (I usually have them directed to the “Junk-Email” folder.)

In the case of an Exchange -> Hub Transport -> Transport rule, I usually set exchange to assign anything with [SPAM ASSASSIN DETECTED SPAM] in the subject line to a Spam Confidence Level (SCL) of 7.  If  you want to enable SCL junk filtering and set your own SCL levels, you will need some Exchange PowerShell commands.  Google can tell you more.  http://msexchangeteam.com/archive/2009/11/13/453205.aspx is a good article to read as well.

Set-ContentFilterConfig -SCLDeleteEnabled $true -SCLDeleteThreshold 9
Set-ContentFilterConfig -SCLRejectEnabled $true -SCLRejectThreshold 8
Set-OrganizationConfig -SCLJunkEnabled $true -SCLJunkThreshold 7

Go to Exchange -> Hub Transport -> Anti-Spam -> Content Filtering.  Enable it, and uncheck any boxes except “Delete Messages that have an SCL greater than or equal to.”  The rationale behind this is that the SpamAssassin server is doing all the heavy filtering.  If you allow Exchange to reject mails, you are going to end up with a mess of rejection NDRs that will pile up and go nowhere.  Similarly, under Exchange -> Hub Transport -> Remote Domains -> Default (*) I really recommend disabling non-delivery reports.  There is a growing trend amongst email administrators to not accept mail from domains that send NDRs, as NDRs are being used by spammers as a vector to get spam into people’s e-mail boxes.

Run freshclam and sa-update from the command line to get ClamAV and SpamAssassin updated to the latest definitions.

Go into Webmin -> System -> Bootup and Shutdown.  Make sure important things like ClamAV-Milter, SpamAssassin and Sendmail are all set to start on boot (and are currently running.)

That’s it!  If you’ve done it right, then you should now have a CentOS box capable of receiving e-mail from the internet, scanning it for viruses and Spam, and forwarding it on to your exchange server.  The exchange server itself can be configured with junk-filtering properties, adding a second layer of protection.  (Though in truth I’ve not needed it: SpamAssassin does the job just fine, and better than Exchange’s native capabilities.)

RDP and barcode scanners


On some machines, you may find that input from devices (such as barcode scanners) that fire keyboard events in rapid succession into an RDP session gets corrupted.  This occurs particularly when trying to use these devices in combination with a JavaScript- or CSS-heavy webpage in certain browsers.

The workaround for now is to change the RDP settings on the connecting computer.  In your RDP client do the following:

Select “Options”
Go to “Local Resources”
Under “Keyboard” set “Apply windows key combinations” to “On this computer.”
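If you deploy pre-built .rdp files rather than having users click through the GUI, the same setting can be baked into the file.  To the best of my knowledge the relevant property is keyboardhook, with a value of 0 meaning “on this computer”:

keyboardhook:i:0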

First Post!


Okay, it’s not a first post in the traditional sense. It is however the first post made entirely from my blackberry. I discovered a wordpress application in the Blackberry e-store, and simply had to try it out.

It seems to work well, and frankly I think that it opens up a whole new world of semi-productive ways to waste time. Less solitaire, more articles. Both my readers will be thrilled.

Anyway, from a pure technology standpoint, I rather like the application. It integrates well with wordpress, allows geotagging and picture upload, and by default sets your articles as “phone drafts.” I know a few companies using wordpress as PR blogs…this may be worth a look for them. It even has spellcheck.

An interesting thing to note is that if you attempt to make a geotagged post from a location where your Blackberry cannot get a GPS fix, it will simply refuse to submit the post.

With a little variation, something like this could be used as an interesting (and cheap) geotagging business tool. Set up a wordpress blog for internal use at a delivery service. Create templates for “delivered,” “no one home,” etc. Simply click a few buttons and you can update your info centrally.

I am sure more models exist…but unlike twitter and similar limited formats, I can see some real potential in tools like this.

Redhat-based thin client tips


All of the below tips are for Redhat-based systems.  They are really minor, easily searchable items that I have found useful to remember.  I have found them of value when configuring Redhat/Fedora systems as thin clients.   We use Fedora 10 and CentOS 5, usually to turn old hardware into something that will RDP to a Windows virtual machine.  The people using them don’t want to know how it works, just that when they double click on the icon, they get a windows desktop.  The tips here make such a configuration easier to administer.

System won’t show its hostname in a Windows-powered DHCP:

Edit /etc/sysconfig/network-scripts/ifcfg-eth0  (Or eth1, eth2, etc.)
Add the line DHCP_HOSTNAME=[system_hostname]  (Without the brackets.)
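For reference, a minimal DHCP-configured ifcfg-eth0 ends up looking something like this (the hostname is just an example):

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
DHCP_HOSTNAME=thinclient01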

Have gnome auto-logon a non-root user:

Edit /etc/gdm/custom.conf
Add the following lines:

[daemon]
TimedLoginEnable = true
TimedLogin = [username]  (Without the brackets.)
TimedLoginDelay = 0  (Salt to taste.)

Exclude a package from yum updates:

In this example, I exclude tsclient, because I prefer the version 1 client to version 2.
(This is because version 2 forces you to store an RDP password, which is terrible for our thin-client purposes.)

Edit /etc/yum.conf
Add the line exclude = tsclient

Note: You can search older RPMs on http://rpm.pbone.net/

Set up vino in gnome so that you can remote administer a system.
(This is a good configuration for auto-logged-on thin clients.)

yum install vnc vnc-server libvncserver vino

Go to System > Preferences > Internet & Network > Remote Desktop
General
– Allow others to view  (This enables vino.)
– Allow others to control  (So that you can manipulate the system.)
– Do not ask for confirmation  (Entirely up to you.)
– Require enter password  (Enter a password.)
