Today, I was trying to find email messages that I had sent which haven't had any responses. My MUA is mutt and my mail is stored in Maildir folders. This could be done with a fairly simple shell script using find and grep, but with about 40,000 messages in the one folder I wanted to search, I was looking for at least a slightly more elegant solution.
The idea is to find messages I had sent (to a specific recipient in this case) and extract the Message-Id; then for each Message-Id, search for messages with that Message-Id in their In-Reply-To or References header. If no messages are found for this second search, we conclude that no response has been received.
I ended up writing a simple shell script that makes use of mairix, which indexes each of the three fields I need to search. Unfortunately, mairix doesn't allow searching the In-Reply-To or References headers directly, but does provide a way to return all of the messages within the same thread as messages returned as a search result. This allows us to restrict the number of messages searched in the second pass to a single thread instead of the entire mailbox.
See the script below:
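A minimal sketch of the approach (not the original script; the addresses, match-folder paths, and the sed/grep header handling below are assumptions) might look like this:

#!/bin/sh
# Sketch: find messages I sent to a recipient that have received no reply.
# Assumes mairix is already configured and indexed, that its match folders
# are Maildirs, and that the addresses below are placeholders.

me=me@example.com
recipient=someone@example.com
sent=/tmp/mairix-sent
thread=/tmp/mairix-thread

# Pass 1: messages I sent to the recipient.
mairix -o "$sent" f:"$me" t:"$recipient"

for msg in "$sent"/cur/* "$sent"/new/*; do
    [ -f "$msg" ] || continue

    # Extract this message's Message-Id (stop at the end of the headers).
    msgid=$(sed -n '/^$/q; s/^[Mm]essage-[Ii][Dd]:[[:space:]]*//p' "$msg" | head -n 1)
    [ -n "$msgid" ] || continue

    # Pass 2: pull in the whole thread containing this Message-Id (-t), then
    # look for a message that names it in In-Reply-To or References.
    # (Folded References headers are not handled by this simple grep.)
    mairix -o "$thread" -t m:"$(echo "$msgid" | tr -d '<>')"
    if ! grep -Eqi "^(In-Reply-To|References):.*$msgid" \
            "$thread"/cur/* "$thread"/new/* 2>/dev/null; then
        echo "No reply to $msgid ($msg)"
    fi
done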
tech » mail | Permanent Link
I was recently asked to configure Exim to archive all mail sent and received by certain customers. Users authenticate to send mail using their email address so I used a domainlist to specify which domains' users should have their mail archived.
domainlist archive_domains = example.com
I created two routers and a transport for handling the mail sent by authenticated users. The first is a redirect router which rewrites the recipient address to a special address containing the sender's address, e.g. _!%#archive#%!_-user@example.com. This router has the unseen option set so the message is routed to the original recipient as usual. This doubles the number of recipients, but Exim discards duplicates, so the final recipients are the original recipients plus the sender's archive copy. The second router strips the _!%#archive#%!_- prefix and delivers the message to the sender's archive mailbox using a special transport.
These routers should probably be the first two since you don't want another router to accept delivery of the message first.
archive_by_sender_rewrite:
  driver = redirect
  condition = ${if and { {def:authenticated_id}{match_domain{${domain:$authenticated_id}}{+archive_domains}} }{yes}{no}}
  data = _!%#archive#%!_-$authenticated_id
  unseen
  no_repeat_use
  no_verify

archive_by_sender:
  local_part_prefix = _!%#archive#%!_-
  driver = accept
  no_verify
  transport = archive_by_sender
Because $authenticated_id is used to get the sender's address, you should have server_set_id = $1 in your authenticators so the variable gets set.
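For example, a LOGIN authenticator might look something like this; the name and the credential lookup are placeholders rather than part of the original setup, and the line that matters here is server_set_id:

login_server:
  driver = plaintext
  public_name = LOGIN
  server_prompts = "Username:: : Password::"
  # hypothetical credential check; with LOGIN, $1 is the username
  # (here, the user's email address) and $2 is the password
  server_condition = ${if eq{$2}{${lookup{$1}lsearch{/etc/exim/passwd}}}{yes}{no}}
  server_set_id = $1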
The router to archive received mail is pretty simple. It uses the unseen option again to create a copy of the message and, like archive_by_sender, uses a separate transport to archive the message. This router should be placed before any routers that accept mail for the +archive_domains. If you use routers to discard or quarantine spam, this one should come before those if you want to archive the spam received.
archive_by_recipient:
  driver = accept
  domains = +archive_domains
  unseen
  no_verify
  transport = archive_by_recipient
Here are the transports. The messages are written to maildir directories. Any missing directories will be created if Exim has permission to create them.
archive_by_sender:
  driver = appendfile
  maildir_format
  mode = 0600
  mode_fail_narrower = false
  envelope_to_add = true
  return_path_add = true
  create_directory
  directory = /path/to/archive/$domain/$local_part/sent

archive_by_recipient:
  driver = appendfile
  maildir_format
  mode = 0600
  mode_fail_narrower = false
  envelope_to_add = true
  return_path_add = true
  create_directory
  directory = /path/to/archive/$domain/$local_part/received
If you don't have the default rule in your rcpt acl to reject local parts containing %, !, etc., you should make sure you don't accept mail for the special archive user address. Safeguarding against malicious users with shell access is left as an exercise for the reader. (Hint: I would probably look at $received_protocol.)
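For reference, the restricted-characters rule in Exim's default acl_check_rcpt looks roughly like this; it applies to +local_domains, which would normally include the +archive_domains:

  deny    message       = Restricted characters in address
          domains       = +local_domains
          local_parts   = ^[.] : ^.*[@%!/|]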
tech » mail | Permanent Link
Looking at my web stats, I saw a few visitors from Google who were looking for information on how to get old messages onto a Blackberry. Since I didn't actually have that information, I'll add it here.
Suppose you have some old messages in your IMAP inbox which have been purged from your Blackberry because they are too old, and you would like to get them back so you can reply while on the subway. Just copy them from your inbox back to your inbox. You can do this in a single action with mutt: saving the messages to your inbox copies them there and marks the originals deleted. Thunderbird, however, doesn't seem to allow you to create a copy of messages within a folder, so you will have to move the messages (copy and delete the originals) to another folder and then copy them back to your inbox.
The copies will then look like new messages to the Blackberry service, and your Blackberry will download them. If, within your non-mobile MUA, you sort your inbox by date or date-then-thread (mutt's threaded mode), your inbox should appear the same as before making the copies, but if you sort your messages in the order they exist in the IMAP store (including Thunderbird's threaded mode), the copies will appear at the end (or beginning, depending on your sort direction) of your inbox.
tech » mail | Permanent Link
Denisa decided she doesn't want to feel compelled to check her email all day long, and asked me if there was a way to restrict the hours during which she could receive email. Since I use the magical MTA that is Exim, I was sure this must be possible. While I couldn't find explicit support for such a feature, I was able to hack something out. Here's my new local_delivery transport:
local_delivery:
  driver = appendfile
  envelope_to_add
  file = /var/spool/mail/${local_part}
  group = mail
  mode = 0660
  no_mode_fail_narrower
  return_path_add
  # hack to queue messages during certain hours
  message_size_limit = ${if ! and {\
      {match_local_part{$local_part}{+time_restricted_users}} \
      {or {{<{${substr_11_2:$tod_log}}{21}}{>={${substr_11_2:$tod_log}}{22}}}} \
    }{0}fail}
The trick I used was to force expansion failure of the message_size_limit when delivering a message to her address and when the current time matches our constraints, in this case before 9pm or after 10pm. The expansion failure causes the message to be queued. To ensure that she actually gets her queued messages during that one hour window, I added a new retry rule for our domain that retries every 15 minutes for four days, rather than the default rule which increases the interval between delivery attempts as the time on the queue increases. In case I ever want to configure other accounts similarly, I set up a localpartlist named time_restricted_users.
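Together, those two pieces might look something like this (a sketch only: denisa and example.com stand in for the real local part and domain, and F,4d,15m is one way of expressing "retry every 15 minutes for up to four days"):

# main configuration section
localpartlist time_restricted_users = denisa

# retry configuration section, before the catch-all "*" rule
example.com      *      F,4d,15m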
tech » mail | Permanent Link
When I upgraded my mail server from Woody to Sarge, Postman, my webmail client, stopped working. When trying to log in, I got the following error:
It turns out that the newer version of C-client, the library that postman uses for IMAP, automatically tries to verify the certificate, even if you have postman configured to connect to the non-SSL port; I guess it calls STARTTLS. Since I'm only using a self-signed certificate, I get the error above.
The solution is to configure postman to not verify the certificate, using the novalidate-cert switch in /etc/postman/interdaemon.cfg.
[mail.xerus.org]
imapserver = mail.xerus.org/novalidate-cert
imapport = 143
smtpserver = localhost
;for SMTP authentication. 0=No,1=Must,2=Try
authsmtp = 0
maildomain = xerus.org
mailboxprefix =
remotepath = ~/mail/
deniedservices =
tech » mail | Permanent Link
There are a growing number of spammers exploiting PHP scripts to send spam. Such scripts are often simple "Contact Us" forms that use PHP's mail() function. It is important to validate any user-supplied input before passing it to mail().
For example, consider the following simple script.
<?php
$to = 'info@example.com';
$subject = 'Contact Us Submission';
$sender = $_POST['sender'];
$message = $_POST['message'];
$mailMessage = "The following message was received from $sender.\n\n$message";
mail($to, $subject, $mailMessage, "From: $sender");
?>
Such a script looks fairly innocuous. The problem is that the sender variable sent from the client is not sanitized. By manipulating the value sent in the sender variable, a malicious spammer could cause this script to send messages to anyone.
Here's an example of how such an attack might be carried out.
curl -d sender="spammer@example.com%0D%0ABcc: victim@example.com" \
-d message="Get a mortgage!" http://www.example.com/contact.php
Now, in addition to being sent to info@example.com, the message will also be sent to victim@example.com.
The solution to this problem is to either not set extra headers when using mail(), or to sanitize all data being sent in these headers. A simple example would be to strip out all whitespace from the sender's address.
$sender = preg_replace('~\s~', '', $_POST['sender']);
A more sophisticated approach might be to use PEAR's Mail_RFC822::parseAddressList() to validate the address.
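A sketch along those lines (assuming PEAR and its Mail package are installed; the example.com default domain is a placeholder) might be:

<?php
require_once 'PEAR.php';
require_once 'Mail/RFC822.php';

$sender = $_POST['sender'];

// Parse and validate the submitted address; require exactly one address.
// The default domain ('example.com') is only a placeholder here.
$parsed = Mail_RFC822::parseAddressList($sender, 'example.com', false, true);
if (PEAR::isError($parsed) || count($parsed) != 1) {
    die('Invalid sender address');
}

// Rebuild the address from the parsed parts so no injected headers survive.
$sender = $parsed[0]->mailbox . '@' . $parsed[0]->host;
?>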
tech » mail | Permanent Link
At customer request, we're going to start offering outbound SMTP service to Postica customers. Doing so requires a much greater guarantee of availability than is required when only accepting mail from other MTAs. MTAs are able to use multiple MX records when attempting to deliver mail, and will queue mail if none of the MX hosts are available. MUAs, on the other hand, can generally only be configured with a single hostname to use as the SMTP server for outbound mail, and tend to show the user an unpleasant error message if there is a problem connecting to the SMTP server.
To provide high-availability, load-balanced SMTP service, I decided to use round-robin DNS in combination with CARP, the UCARP implementation specifically. CARP is a protocol for supporting failover of an IP address, very similar to VRRP.
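On the DNS side, round-robin just means publishing one A record per ucarp-managed address, something like this (the TTL is an arbitrary choice; the addresses are the example ones used in the configuration below):

smtp.postica.net.    300    IN    A    192.168.1.201
smtp.postica.net.    300    IN    A    192.168.1.202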
I installed the Debian ucarp package on two servers. Each server is the preferred server for one ucarp-managed IP address and the backup for the other; smtp.postica.net points to both addresses. I also installed the iputils-arping package, which is used to send gratuitous arps when the IP address moves to a new server, thus causing the MAC address to change. Note that the arping program in the iputils-arping package is different than the one in the arping package.
I added two up options to /etc/network/interfaces on each server to start one ucarp process for each IP address when the physical interface to which the ucarp addresses are bound is brought up.
auto eth0
iface eth0 inet static
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.1
    up ucarp -i eth0 -s 192.168.1.101 -v 201 -p secretPassword -a 192.168.1.201 \
        --upscript=/etc/ucarp/vip-201-up.sh --downscript=/etc/ucarp/vip-201-down.sh -P \
        -z -k 10 --daemonize
    up ucarp -i eth0 -s 192.168.1.101 -v 202 -p secretPassword -a 192.168.1.202 \
        --upscript=/etc/ucarp/vip-202-up.sh --downscript=/etc/ucarp/vip-202-down.sh -P \
        -z -k 0 --daemonize
    down pkill ucarp
The interfaces file is essentially the same on the second server, but the values of the -k arguments, the advertisement skew which determines priority, are swapped. If you were running ucarp on multiple interfaces, you probably wouldn't want to kill all ucarp processes when bringing an interface down; you might want to use start-stop-daemon with --make-pidfile and --background instead of using ucarp's --daemonize option.
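For example, the first up line above might become something like this (a sketch: the pidfile name is arbitrary, and /usr/sbin/ucarp is assumed to be where the Debian package installs the binary):

    up start-stop-daemon --start --background --make-pidfile \
        --pidfile /var/run/ucarp-vip-201.pid --exec /usr/sbin/ucarp -- \
        -i eth0 -s 192.168.1.101 -v 201 -p secretPassword -a 192.168.1.201 \
        --upscript=/etc/ucarp/vip-201-up.sh --downscript=/etc/ucarp/vip-201-down.sh \
        -P -z -k 10
    down start-stop-daemon --stop --pidfile /var/run/ucarp-vip-201.pid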
The --upscript and --downscript arguments tell ucarp what scripts to run when taking over or releasing an IP address, respectively. Here's an example of each:
#! /bin/sh
exec 2> /dev/null

/sbin/ip addr add 192.168.1.201/24 dev "$1"
start-stop-daemon --start --pidfile /var/run/ucarp-arping.192.168.1.201 \
    --make-pidfile --background --exec /usr/sbin/arping -- -q -U 192.168.1.201
#! /bin/sh
exec 2> /dev/null

/sbin/ip addr del 192.168.1.201/24 dev "$1"
start-stop-daemon --stop --pidfile /var/run/ucarp-arping.192.168.1.201 \
    --exec /usr/sbin/arping
rm /var/run/ucarp-arping.192.168.1.201
In theory, it should only be necessary to send a single gratuitous arp (or maybe a couple). I had a problem when using vrrpd, though, in which the backup host would briefly become the master, the arp table on the router would get updated with the MAC address of the new master, and then the host would go back to being the backup. During this period, the other host would think it had been the master the entire time, and so would not send any arp updates, making the IP address unreachable until the router's arp table was updated. I don't know whether this could occur with CARP, but I prefer to play it safe and have the master continue to send unsolicited arps by using start-stop-daemon to spawn a long-running arping process.
In summary, round-robin DNS is used to balance the load across the two servers, and in the event that one of the servers goes down, both IP addresses will be handled by a single server.
tech » mail | Permanent Link
It looks like mutt development is starting to pick up again. For those not familiar with it, mutt is the best email client out there. Development has forked and there is a new mutt-ng project. Kyle Rankin has written up a little summary. So far, it's mostly just integration of many of the third-party patches that have been available for a while. Since the Debian package already includes many of these patches, including one of the most important, header caching, that's not too exciting.
Two of the new features included in mutt-ng are a sidebar and NNTP support. The sidebar is similar to those in most GUI mail readers; it shows the number of messages in the folders in your mailboxes list. With pager_index_lines set, mutt basically looks like a text-mode version of the common three-pane interface in most GUI clients. I'll probably unsubscribe from the exim-users mailing list once the newsreader works since I can just read the gmane group. Right now, trying to read a usenet message unfortunately causes a segfault.
mutt-ng seems a bit slower too. Returning from the pager to the index takes an extra second or so.
Debian packages for sid are here:
deb http://people.debian.org/~nobse/debian/ unstable/
tech » mail | Permanent Link
I spent a few hours today researching Exchange replacements. These are products that are designed to replace Microsoft Exchange on the server, but still allow use of Outlook as a client, including the much-beloved calendaring features.
Here's what I came up with.
Date: Tue, 16 Nov 2004 16:24:55 -0800
From: "Christian G. Warden" <cwarden@zerolag.com>
Subject: Exchange replacement analysis - phase 1

There are a handful of products that claim to be Exchange replacements. They all work in the same manner, using a custom MAPI connector, which is basically a plug-in for Outlook, to access the server. Each version of Outlook has different features, so most of these products only work with certain versions of Outlook. Because I'm not very familiar with Outlook, it is difficult for me to tell whether these products fully support the features of Outlook. We'll need to set up a test environment to fully evaluate any of these products.

OpenGroupware[1]
This was previously a closed-source server that was open sourced a couple of years ago. I evaluated it briefly a year or so ago, and it seemed stable and featureful, but had a bit of a clunky web interface. It looks like development is pretty active, though. I haven't evaluated the Outlook connector, ZideLook[2], a commercial product which costs about $50 per client. There is no demo of ZideLook available. ZideLook communicates with OpenGroupware using WebDAV. OpenGroupware just handles the groupware functionality and integrates with third-party IMAP servers.
1. http://www.opengroupware.org/en/index.html
2. http://esd.element5.com/product.html?cart=1&productid=517934&languageid=1&nolselection=1&currencies=EUR

SUSE LINUX Openexchange Server[3]
This is a commercial product. It is distributed as a full Linux distribution and cannot be installed on an existing Linux system. (At least, such an installation would not be supported.) Pricing is unclear. The product is supposed to be available for purchase online at novell.com, but isn't, perhaps because they are currently integrating the product with Novell's Groupwise. There is an online demo[4] and the Outlook connector is available for download[5]. Openexchange is made up of a number of open source components and comFire, the groupware component, which was licensed from a company called Netline. comFire has recently been open sourced by Netline as Open-Xchange[6], but the Outlook connector is not licensed for use with Open-Xchange. The Outlook connector communicates with the server using WebDAV. There is a good article about Openexchange[7].
3. http://www.suse.com/us/business/products/openexchange/index.html
4. http://www.suse.com/us/business/products/openexchange/demo.html
5. http://www.suse.com/us/business/products/openexchange/download.html
6. http://mirror.open-xchange.org/ox/EN/product/
7. http://www.linux-magazine.com/issue/48/Suse_Linux_Openexchange_41.pdf

Bynari Insight Server[8] and Insight Connector[9]
I believe Bynari was the first company with an "Exchange replacement on Linux" product. Their Outlook connector allows calendars and address books to be stored on an IMAP server. It claims to require the Insight Server, though Insight Server uses Cyrus as the IMAP server, so it may work with a normal Cyrus server. Insight Server is composed of a number of open source products such as Postfix, OpenLDAP, and Apache. Bynari seems to think most of the value is in the Connector, since a 1000-user license for Insight Connector is $17,000, and a 1000-user license for a bundled Insight Server and Insight Connector is $18,000. (Insight Server without the Connector is also sold for $1,000.) A demo is available.
8. http://www.bynari.net/index.php?id=1169
9. http://www.bynari.net/index.php?id=7

BILL Workgroup Server[10]/Exchange4Linux[11]
Documentation is kind of spotty on this one. I don't think it's worth evaluating except as a last resort.
10. http://www.billworkgroup.org/billworkgroup/home
11. http://www.exchange4linux.com/exchange4linux/Home

None of the Above (IMAP/LDAP/SMTP/WebDAV or FTP)
Depending on the customer's needs, perhaps Outlook in "Internet Mail Mode" will be sufficient. IMAP supports shared folders, but I don't know if it supports setting ACLs. Outlook also supports LDAP for address books, but I don't know if it supports updating the directory. Outlook can send meeting requests and responses over email and publish free/busy time over FTP (and, I think, either WebDAV or HTTP PUT), but I don't know if this would meet the customer's needs.

I recommend trying out Openexchange first as it seems to be the most open and widely deployed.

Christian
Comments from anyone who has deployed one of these products for use with Outlook would be appreciated.
tech » mail | Permanent Link
I've been using sender address verification callbacks for a long time. It helps eliminate a lot of spam by checking whether the sender's address is deliverable. Unfortunately, there are a number of systems that send mail with an invalid envelope sender. These are often generated by scripts on a web server where the sender defaults to the-apache-user@the.web.server.name. There are also a number of misconfigured mail servers, mostly IMail installations, that do not accept messages with null senders. This not only prevents their users from receiving bounce messages, but also prevents sender address verification from working.
Until yesterday, I rejected messages at RCPT time that failed sender address verification. Dealing with the number of false positives for a significant number of users has proven too difficult, so I decided to continue using sender address verification but to incorporate the result into an overall SpamAssassin score.
Andrew, on the exim-users list, provided a helpful Exim ACL snippet, which I modified a bit to come up with the following:
acl_callout_test:
  warn    set acl_m6 = TEMP

  accept  verify = sender/callout=60s,random
          set acl_m6 = OK

  warn    set acl_m6 = FAIL

acl_check_rcpt:
  warn    acl = acl_callout_test

  warn    message = X-Sender-Verification: $acl_m6
This adds an X-Sender-Verification header which I then check for in SpamAssassin.
header   POSTICA_SENDER_ADDRESS_FAIL      X-Sender-Verification =~ /FAIL/
describe POSTICA_SENDER_ADDRESS_FAIL      Sender Address Verification Failure
score    POSTICA_SENDER_ADDRESS_FAIL      2.0

header   POSTICA_SENDER_ADDRESS_TEMPFAIL  X-Sender-Verification =~ /TEMP/
describe POSTICA_SENDER_ADDRESS_TEMPFAIL  Sender Address Verification Temp Failure
score    POSTICA_SENDER_ADDRESS_TEMPFAIL  1.0
I may have to tweak the scores, but so far, it's working pretty well.
tech » mail | Permanent Link