Performance measurements ?



Niek Jongerius
2005-10-13, 05:12
Hi,

Lately I am pretty swamped with mail, so forgive me if this issue has
already been addressed in another way.

In the ongoing discussions about performance issues there is often
little hard data, mostly just "it's plenty fast" or "it's slooow".
I have cobbled together a simple C proggy that takes a text file with
a few CLI commands and fires them at the SlimServer, timing how long
the CLI takes to respond.

This data has to be taken for what it is: a very imprecise, but
possibly useful ballpark figure about how fast SlimServer responds.
By using a text file for the commands, this tool is pretty flexible
in what it executes. The command line syntax:

sstime <SlimServer_IP> <CLI_port> <inputfile>

An example of the contents of an input file:

info total artists ? |Artists:|19
info total albums ? |Albums :|18
info total songs ? |Songs :|17
info total genres ? |Genres :|18
titles 0 10
titles 0 100
titles 0 1000
titles 0 10000

The first part, up to the optional '|', is the CLI command. After the
first '|' you can add a comment, and after the second '|' you can specify
a character index into the result; the result is reported from that point
on. If no comment is specified, the executed command itself is shown in
the output (hopefully this makes sense).
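
In essence the program does no more than the following (a stripped-down
sketch of the idea, not the actual sstime.c - no input file parsing and
no error niceties):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Time one CLI command against the server; returns seconds elapsed. */
static double time_command(const char *ip, int port, const char *cmd)
{
    struct sockaddr_in addr;
    struct timeval start, end;
    char buf[65536];
    ssize_t n;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons((unsigned short)port);
    addr.sin_addr.s_addr = inet_addr(ip);
    if (s < 0 || connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        exit(1);
    }

    snprintf(buf, sizeof(buf), "%s\n", cmd);
    gettimeofday(&start, NULL);
    send(s, buf, strlen(buf), 0);
    /* The CLI terminates its reply with a newline; keep reading until we
     * see one (simplified: assumes the newline ends a recv chunk). */
    do {
        n = recv(s, buf, sizeof(buf), 0);
    } while (n > 0 && buf[n - 1] != '\n');
    gettimeofday(&end, NULL);
    close(s);

    return (double)(end.tv_sec - start.tv_sec) +
           (double)(end.tv_usec - start.tv_usec) / 1e6;
}

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <ip> <port> <command>\n", argv[0]);
        return 1;
    }
    printf("[%.3f] %s\n",
           time_command(argv[1], atoi(argv[2]), argv[3]), argv[3]);
    return 0;
}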

The output of this file on my laptop (the starting value is the time in
seconds it took to execute the command):

[0.174] Artists: [241]
[0.004] Albums : [666]
[0.002] Songs : [6795]
[0.003] Genres : [47]
[0.088] titles 0 10
[0.369] titles 0 100
[3.868] titles 0 1000
[24.201] titles 0 10000

Linux binary (compiled on SuSE 9.3):

http://media.qwertyboy.org/files/sstime.bin

Windows binary (compiled with Visual Studio .NET; it probably requires the
.NET Framework to run. XP and W2K3 should be fine):

http://media.qwertyboy.org/files/sstime.exe

Niek.

mherger
2005-10-13, 06:03
> This data has to be taken for what it is: a very imprecise, but
> possibly useful ballpark figure about how fast SlimServer responds.

Quite true. I got interesting results (connecting to my server 150km
further southwest using SSH :-)):

[0.375] Artists: 433
[0.032] Albums : 544
[0.031] Songs : 6543
[0.031] Genres : 50
[20.891] titles 0 10
[16.235] titles 0 100
[29.329] titles 0 1000
[114.393] titles 0 10000

Interestingly "titles 0 10" was slower than "titles 0 100" three out of
four times I ran the test. (BTW: what does that command actually do?)

This is on a Via C3/1GHz/512MB running SME Linux.

> The output of this file on my laptop (the starting value is the time in
> seconds it took to execute the command):

Maybe you should give some numbers about your configuration (CPU, RAM).

--

Michael

-----------------------------------------------------------
Help translate SlimServer by using the
SlimString Translation Helper (http://www.herger.net/slim/)

Niek Jongerius
2005-10-13, 08:11
> Quite true. I got interesting results (connecting to my server 150km
> further southwest using SSH :-)):
>
> [0.375] Artists: 433
> [0.032] Albums : 544
> [0.031] Songs : 6543
> [0.031] Genres : 50
> [20.891] titles 0 10
> [16.235] titles 0 100
> [29.329] titles 0 1000
> [114.393] titles 0 10000
>
> Interestingly "titles 0 10" was slower than "titles 0 100" three out of
> four times I ran the test. (BTW: what does that command actually do?)

According to the CLI tech docs, it returns songs: the first number is the
start index and the second the maximum number of titles returned, so
"titles 0 10" gives you the first 10 songs.
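
For illustration (not a verbatim transcript - the exact tags depend on
the server version, and the reply comes back URL-encoded on one line),
the exchange over port 9090 looks something like:

titles 0 2
titles 0 2 id%3A101 title%3ASome%20Song id%3A102 title%3AAnother%20Song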

> This is on a Via C3/1GHz/512MB running SME Linux.
>
>> The output of this file on my laptop (the starting value is the time in
>> seconds it took to execute the command):
>
> Maybe you should give some numbers about your configuration (CPU, RAM).

This was on a P4 2.8 GHz with 1 GB RAM, running 5 instances of SlimServer.
The songs themselves are on an iPod, but that probably doesn't matter for
the times reported, as these queries only interact with the database.

Niek.

Bill Burns
2005-10-13, 08:22
Took me a minute to find out that the CLI port is 9090!

Here are my results over the local network. SlimServer on a
P4/3GHz/1GB/WinXP, streaming an internet radio station to an SB1 during
these tests, with nothing else running on the server.

[0.109] Artists: 503
[0.000] Albums : 818
[0.000] Songs : 7068
[0.000] Genres : 93
[47.063] titles 0 10
[12.500] titles 0 100
[21.453] titles 0 1000
[58.562] titles 0 10000

Second pass:

[9.875] titles 0 10
[10.609] titles 0 100
[27.500] titles 0 1000
[55.391] titles 0 10000

Third pass:

[10.047] titles 0 10
[14.328] titles 0 100
[20.125] titles 0 1000
[66.390] titles 0 10000

Here "titles 0 10" was slower only on the first pass.

--
Bill Burns
Long Island NY USA
mailto:billb (AT) ftldesign (DOT) com

MrC
2005-10-13, 08:51
Some food for thought...

> Second pass:
> [55.391] titles 0 10000
>
> Third pass:
> [66.390] titles 0 10000

With a 20% variance between passes, we can see that the testing
methodology and environment are very rough.

And, with Niek running a P4/2.8 getting

[24.201] titles 0 10000

and Bill running a P4/3 getting an average of

[60.114] titles 0 10000

There's an almost 2.5x difference in times on hardware whose specs alone
would account for only about a 10% difference.

Hopefully nobody will look at these data points and attempt to draw unwarranted conclusions.

wr420
2005-10-13, 09:03
Can you post the source? I need to run it on Solaris.
The Linux version errors out. Maybe I'm doing something wrong?
I renamed sstime.bin to sstime.
Set execute permissions.
sstime is in my path.
slimserver.pl is in my path as slimserver.pl and as slimserver.
The user can read the input file.
If I were to run it from a Windows or Linux box and point it at my slim
server across the network, would that affect the results?

-bash-3.00# sstime 192.168.1.35 9090 sstime.txt
-bash: /usr/bin/sstime: Invalid argument

Thanks


Philip Meyer
2005-10-13, 11:34
> sstime <SlimServer_IP> <CLI_port> <inputfile>
>
I can't get this to work under XP.

I get the following output:

[0.000] Artists:
Error reading from SlimServer!

I'm running 6.2 trunk.

Phil

mherger
2005-10-13, 11:37
>> sstime <SlimServer_IP> <CLI_port> <inputfile>
>>
> I can't get this to work under XP.

You'll have to open port 9090 on the server's firewall

--

Michael

-----------------------------------------------------------
Help translate SlimServer by using the
StringEditor Plugin (http://www.herger.net/slim/)

Philip Meyer
2005-10-13, 12:19
>You'll have to open port 9090 on the server's firewall
Windows firewall is disabled. I am running Sygate Personal Firewall, which lists port 9090 as allowed for perl.exe.

I can connect via telnet to localhost 9090, but as soon as I type anything and press return, I get a disconnect with no output.

Phil

mherger
2005-10-13, 12:48
>> You'll have to open port 9090 on the server's firewall
> Windows firewall is disabled. I am running Sygate Personal Firewall,
> which lists port 9090 as allowed for perl.exe.

It's not perl.exe which is accessing port 9090, but sstime.exe. Put the
firewall in learning mode (or disable it entirely to be sure).

> I can connect via telnet to localhost 9090, but as soon as I type
> anything and press return, I get a disconnect with no output.

I'd really say this is the firewall blocking access to this port.

--

Michael

-----------------------------------------------------------
Help translate SlimServer by using the
StringEditor Plugin (http://www.herger.net/slim/)

Michaelwagner
2005-10-13, 15:27
Yeah, you really need to be careful about methodology here.

If you want to select typical things and get typical response times, you probably need to carefully think through the typical things people do at the user interface, mimic them in the CLI, and run them on many different configurations.

I doubt many people list the top thousand songs when they're at the remote.

If you want to benchmark the code to do before-and-after studies of code improvements, you need one typical machine; benchmark it accurately AND THEN FREEZE IT AND DON'T CHANGE IT. That almost means dedicating it to the benchmarking task and making it a reference system, because you never know when installing Microsoft Office 2007 (heaven help us) or IE 17.2 will change the way I/O works or how much background activity is cluttering up the disk.

Niek Jongerius
2005-10-13, 23:00
>> Second pass:
>> [55.391] titles 0 10000
>>
>> Third pass:
>> [66.390] titles 0 10000
>>
> With a 20% variance between passes, we can see that the testing
> methodology and environment are very rough.
>
> And, with Niek running a P4/2.8 getting
>
> [24.201] titles 0 10000
>
> and Bill running a P4/3 getting an average of
>
> [60.114] titles 0 10000
>
> There's an almost 2.5x difference in times on hardware whose specs
> alone would account for only about a 10% difference.
>
> Hopefully nobody will look at these data points and attempt to draw
> unwarranted conclusions.

Agreed. There are too many variables in the way machines are set up to
readily compare output numbers. CPU and RAM are by no means the only
variables; there is also the OS, the processes running and their
priorities, the intermediate network (if the test program is run over a
network), etc.

But the bottom line is still that it takes the reported amount of time for
the SlimServer to cough up the data requested (assuming the CLI gets at
the data in a comparable way). If Bill is getting 2.5 times worse
performance in the same tests as I do, I would assume his setup performs
about that factor worse than mine when serving a SqueezeBox. The proggy
does nothing fancy (I'll post the source on my site in a minute); it
just times the start and end of the CLI command.

I have not been very inventive in the queries in my sample input file.
It could be that my example commands are somehow not representative for
gauging performance. Someone with a better understanding of what
reasonable queries look like could maybe suggest a few. It's just a
matter of editing the input file to test other CLI commands.
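
For example, something closer to what the web interface or the remote
fires at the database while browsing might be (a guess on my part - the
exact syntax should be checked against the CLI docs):

artists 0 20
albums 0 20
genres 0 20
titles 0 20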

Niek.

Niek Jongerius
2005-10-13, 23:08
> Can you post the source? I need to run it on Solaris.

Will do in a mo.

> The Linux version errors out. Maybe I'm doing something wrong?
> I renamed sstime.bin to sstime.
> Set execute permissions.
> sstime is in my path.
> slimserver.pl is in my path as slimserver.pl and as slimserver.
> The user can read the input file.
> If I were to run it from a Windows or Linux box and point it at my slim
> server across the network, would that affect the results?

Well, it could affect the times reported if the network is dog slow, but
the logic of the program should work. I've tested the Windows version over
a VMware "network" on my machine.

> -bash-3.00# sstime 192.168.1.35 9090 sstime.txt
> -bash: /usr/bin/sstime: Invalid argument

The version you have was compiled on SuSE 9.3. It is probably linked
against dynamic libraries, so it assumes certain versions of the runtime
libs. You could try running it like so:

strace sstime 192.168.1.35 9090 sstime.txt

This may give an indication of what went wrong. I'll post the source on
my site in a minute.
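
If it does turn out to be a runtime library mismatch, building the tool
statically on the target box (once the source is up) should sidestep it,
something like:

gcc -static -o sstime sstime.c

(assuming gcc and the static libs are available on the Solaris box).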

Niek.

Niek Jongerius
2005-10-13, 23:11
>>You'll have to open port 9090 on the server's firewall
> Windows firewall is disabled. I am running Sygate Personal Firewall,
> which lists port 9090 as allowed for perl.exe.
>
> I can connect via telnet to localhost 9090, but as soon as I type anything
> and press return, I get a disconnect with no output.

This is the reason sstime will not work either; it does exactly that.
If you cannot get a manual telnet to port 9090 to work, the test program
will fail as well.
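
A quick manual check from the command line (assuming the "version ?"
query is available on your server):

telnet localhost 9090
version ?

On a healthy connection the server should answer with something like
"version 6.2" instead of dropping you.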

Niek.

Niek Jongerius
2005-10-13, 23:20
>
> Yeah, you really need to be careful about methodology here.
>
> If you want to select typical things and get typical response times,
> you probably need to carefully think through the typical things people
> do at the user interface, mimic them in the CLI, and run them on many
> different configurations.
>
> I doubt many people list the top thousand songs when they're at the
> remote.
>
> If you want to benchmark the code to do before-and-after studies of
> code improvements, you need one typical machine; benchmark it
> accurately AND THEN FREEZE IT AND DON'T CHANGE IT. That almost means
> dedicating it to the benchmarking task and making it a reference
> system, because you never know when installing Microsoft Office 2007
> (heaven help us) or IE 17.2 will change the way I/O works or how much
> background activity is cluttering up the disk.

Note that I did not intend this to be a benchmark tool. In a lot of posts
on this list people said the performance of their install was <insert your
favourite speed indication here>, which is a very subjective indication.
This program simply times a request to the CLI. It should give some idea
of what a user would see when using a real SqueezeBox (assuming we use a
reasonable set of CLI queries, which I probably don't).

Some of us are even desperately switching OSes and tweaking stuff on the
same machine, and discussing whether ActiveState is faster than compiled
Windows or CygWin or whatever. This tool at least gives _some_ numbers.
Again, you cannot compare them one-to-one with other installs, but IMHO
if one install does something in 20 seconds and another one in just 2,
that difference will show up in the user experience when connecting to
and using a SqueezeBox.

Niek.

mherger
2005-10-13, 23:57
> Agreed. There are too many variables in the way machines are set up to
> readily compare output numbers. CPU and RAM are by no means the only
> variables here. OS, procs running, procs priority, intermediate network
> (if test prog is run over a network) etc.

I was still surprised how well my tests reflected each machine's category:
times roughly doubled at each step down from a P4/2.66 (Windows) to a
C3/1000 (Linux) to a C3/600 (Linux), even though their software
configurations are _very_ different.

--

Michael

-----------------------------------------------------------
Help translate SlimServer by using the
SlimString Translation Helper (http://www.herger.net/slim/)

Niek Jongerius
2005-10-14, 00:24
I've put up the source for the Linux version here:

http://media.qwertyboy.org/files/sstime.c

This source should compile on any modern Linux. Other *nix flavors may
need a bit of tweaking in the header files etc. The Windows source needs
some more fiddling; I'll put up a source covering both platforms if there
is a need, but I doubt there are many users willing (or equipped with the
tools) to compile under Windows.

Niek.

Michaelwagner
2005-10-14, 08:07
I can probably pull together an equivalent routine for Windows systems that will work across all current Windows platforms. But I won't be able to get to it until next week - I'm helping the spouse move her place of work this weekend.

It's a good idea, what this routine does. I didn't mean to disparage it in a previous post. It's just that we must realize what it does (and, more importantly, does not) test.

But I think the idea would make an excellent test bed for regular performance regression testing. That is, once a week or so, download the latest nightlies, run a standardized script of enquiries against a standardized configuration (a static set of music files not otherwise used), and see if performance changes on any of the enquiries. If it improves, great. If it takes a sudden nosedive, that's an early warning that something in that code path needs attention (or perhaps it's a known thing, because it's supporting new function). Anyway, it's a warning system.
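
As a sketch (paths and schedule purely hypothetical), the weekly run could be little more than a cron entry firing the tool at the reference box and appending to a log for later comparison:

0 3 * * 1 /usr/local/bin/sstime 192.168.1.35 9090 /usr/local/etc/ssbench.txt >> /var/log/ssbench.log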

Michael

Philip Meyer
2005-10-14, 17:08
>It's not perl.exe which is accessing port 9090, but sstime.exe. Put the
>firewall in learning mode (or disable it entirely to be sure).
>
perl.exe is listening on port 9090.

The firewall always prompts when new applications attempt to send packets on the network. However, as the IP address is local, I guess I don't see any prompts.

>> I can connect via telnet to localhost 9090, but as soon as I type
>> anything and press return, I get a disconnect with no output.
>
>I'd really say this is the firewall blocking access to this port.
No, it's not.

I found the problem - I had configured a username/password for accessing SlimServer. I removed the security setting and sstime.exe worked.

Phil

Niek Jongerius
2005-10-14, 23:26
I have placed the programs and the source for the Linux version on a regular page on my site. The site itself is under active development, so please bear with me if I have screwed things up again. See:

http://media.qwertyboy.org/mono/niek.aspx?Page=SqueezeBox

andreas
2005-10-17, 08:14
Michael Herger wrote:
> [0.375] Artists: 433
> [0.032] Albums : 544
> [0.031] Songs : 6543
> [0.031] Genres : 50
> [20.891] titles 0 10
> [16.235] titles 0 100
> [29.329] titles 0 1000
> [114.393] titles 0 10000
> This is on a Via C3/1GHz/512MB running SME Linux.

...and on a VIA Eden/500MHz/512MB running Trustix:

[0.359] Artists: [120]
[0.014] Albums : [254]
[0.005] Songs : [3974]
[0.006] Genres : [45]
[0.218] titles 0 10
[2.948] titles 0 100
[30.750] titles 0 1000
[150.651] titles 0 10000

Michaelwagner
2005-10-17, 20:15
This is counter-intuitive ...

Michael Herger wrote:
> [20.891] titles 0 10
> [16.235] titles 0 100

Why would 10 titles take more time than 100?

MrC
2005-10-17, 21:04
> This is counter-intuitive ...
>
> Why would 10 titles take more time than 100?

I indicated in an earlier post that the testing methodologies used here are uncontrolled, and the margin of error is too high for the numbers to have any meaning. Without proper controls in place and a reduction of all extraneous variables, such "tests" should be for amusement only.

Niek Jongerius
2005-10-17, 22:58
>> Why would 10 titles take more time than 100?

Your guess is as good as (or possibly better than) mine. But there is
no artifact introduced by the test program that I can see (the very
simple source is here: http://media.qwertyboy.org/files/sstime.c).

> I indicated in an earlier post that the testing methodologies used
> here are uncontrolled, and the margin of error is too high for the
> numbers to have any meaning. Without proper controls in place and a
> reduction of all extraneous variables, such "tests" should be for
> amusement only.

And as I said in an earlier post, the measurements taken here show the
_real_ time it takes for the CLI to perform some database query. Yes,
the numbers that various installs yield are probably hard to compare
amongst each other, but the fact remains that this _is_ the time some
defined query takes to return results using the CLI. If the CLI gives
similar performance to a Slimpy/SB/SB2 when executing queries (and I'm
not knowledgeable enough to say it does, but I can't see why not), then
these numbers give a good indication of how long our beloved hardware
has to wait for the server to respond.

Again, this is _NOT_ meant to show how the server performs in an ideal,
controlled environment; this is a down-to-earth measurement of real-life
installs. Can someone tell me how these performance measurements differ
conceptually from the graphs Triode made that are in the nightlies? Are
they also "for amusement only"? They too give some idea of a real
install, and are not meant for an ideal, controlled environment.

Now if we could come up with a set of CLI commands that gives a good
representation of what a real scenario would fire at the database, we
would have an objective indication of performance instead of vague
statements like "it is too slow" or whatever. _That_ is what I'm trying
to get to.

Unless I am totally off base here...

Niek.

Michaelwagner
2005-10-18, 07:41
The only thing that makes sense to me here is a caching artifact. But it didn't happen with Mike's other test on his other computer ...

mherger
2005-10-18, 10:46
> The only thing that makes sense to me here is a caching artifact. But it
> didn't happen with Mike's other test on his other computer ...

I'd confirm that and offer a possible explanation: slimserver had been
idle for about four days before I ran that test over an ssh connection. As
the mail and web servers on that machine run 24/7, it's pretty probable
that slimserver had been swapped out.
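
(Something like watching "vmstat 1" in another window while firing the
first query should show the swap-in activity, if that is indeed what
happens.)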

--

Michael

-----------------------------------------------------------
Help translate SlimServer by using the
StringEditor Plugin (http://www.herger.net/slim/)

Marc Sherman
2005-10-18, 10:59
Michaelwagner wrote:
> Michael Herger wrote:
>> [20.891] titles 0 10
>> [16.235] titles 0 100
>
> Why would 10 titles take more time than 100?

Ramp-up anomalies (due to pre-fetching, caching, lazy code loading, etc.)
are very common in performance testing. The usual methodology to
eliminate those effects is to run the entire series of tests a few times
first and throw those results away, before you start recording
reportable results.
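
With this tool that would mean something like running the same input
file back to back and recording only the later passes:

sstime 192.168.1.35 9090 sstime.txt (warm-up, discard)
sstime 192.168.1.35 9090 sstime.txt (warm-up, discard)
sstime 192.168.1.35 9090 sstime.txt (record this one)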

- Marc

MrC
2005-10-18, 11:38
Niek, your tool is fine. The problem is in the way the tests are being run, and in folks then looking for an explanation as to why the numbers are anomalous.

> And as I said in an earlier post, the measurements taken here show the
> _real_ time it takes for the CLI to perform some database query.

Sorry, your test tool does not quite do what you say... it does not measure _the_ time it takes; rather, it measures only _a_ single run, which, combined with all the perturbing effects of an uncontrolled system, is terribly inaccurate. The numbers reported earlier demonstrate and support this. A single run of the tool is competing with the rest of the processes on the system - there are dozens or hundreds of other threads also running and competing.

Your test tool does not isolate the various effects that perturb measurements, so it's the benchmarker's job to defeat those effects. With the earlier results being 2-3x out of agreement, it is clear that what is being benchmarked is not in fact what you believe is being measured. Therefore, drawing any conclusions is not very meaningful or useful.

Numerous background processes, virus scanners, network activity, disk spin-up time, low-power to max-power CPU speed-up time, swapping, disk caches, and hardware interrupts are all factors which need to be eliminated or reduced before conclusions can be drawn.

Niek Jongerius
2005-10-18, 23:57
>> And as I said in an earlier post, the measurements taken here show the
>> _real_ time it takes for the CLI to perform some database query.

> Sorry, your test tool does not quite do what you say... it does not
> measure _the_ time it takes; rather, it measures only _a_ single run,
> which, combined with all the perturbing effects of an uncontrolled
> system, is terribly inaccurate. The numbers reported earlier
> demonstrate and support this. A single run of the tool is competing
> with the rest of the processes on the system - there are dozens or
> hundreds of other threads also running and competing.
>
> Your test tool does not isolate the various effects that perturb
> measurements, so it's the benchmarker's job to defeat those effects.
> With the earlier results being 2-3x out of agreement, it is clear that
> what is being benchmarked is not in fact what you believe is being
> measured. Therefore, drawing any conclusions is not very meaningful or
> useful.
>
> Numerous background processes, virus scanners, network activity, disk
> spin-up time, low-power to max-power CPU speed-up time, swapping, disk
> caches, and hardware interrupts are all factors which need to be
> eliminated or reduced before conclusions can be drawn.

All true, but please bear in mind what this tool actually tries to do.
There are quite a few complaints about performance of the server.
Performance in this context is something that is perceived; it is not
a measurement of "top speed". When people complain, they probably just
tried to use their SB. During that test, there were all sorts of other
processes running, just as you explained. That very experience of
performance makes them act and send out a call for help.

This tool tries to capture exactly that experience. It is _intended_ to
run on a system that is polluted by all sorts of junk. The measurement
would not be realistic without the real-life interference of whatever
tries to slow the server down. All we have now is some vague indication
of performance. If someone complains "the server stalls when I navigate
to that menu, then click right, and then press play", it could be very
handy to have the database queries that correspond to his actions, and
have his server (running all the junk that is messing up the machine)
spit out a more tangible value than "it is sooo slow".

I don't expect the tool to be very accurate in light of all that has been
said, but the bottom line is that if someone wants their toy to play a
piece of music, and it takes say 1 minute to start playing whereas a
"normal" server should be able to start in about a second, this tool
could give a more accurate indication of what the user experiences. If
the stats are very poor, maybe people could do some digging into what is
making the server so slow. Turn off whatever service they suspect, run a
couple more tests (using the same tool with the same queries on the tuned
server), and if these new tests show a significant and consistent drop in
response time (say, a factor of two or three), then I guess they are on
to something.

These are just ballpark figures (and very probably a huge ballpark at
that), but still the tool can be used to quantify what people see on
their messed-up server. It is no different from the server stats that
the nightlies can spit out. They too have to be scrutinized with care,
and cannot be readily compared to other installs.

Niek.

kdf
2005-10-19, 00:02
I'm sure what they all really mean to say, Niek, is thank you very much
for your contribution. :)

-k

Niek Jongerius
2005-10-19, 04:16
> I'm sure what they all really mean to say, Niek, is thank you very much
> for your contribution. :)

I know. I was just replying "you're welcome, grab a cold one and
put your feet up".

Cheers, Niek.

Michaelwagner
2005-10-19, 05:54
Yes, I agree with the above. Thanks for the tool.

It is what it is: a measure of what really happened this time. It's not necessarily an accurate, repeatable measuring instrument useful for finding and squashing performance bugs - you need a better (and more calibratable) test bed for that - but it does measure the "user experience", and for that it's useful.

In private email with Dean a few days ago I offered to start some performance benchmarking of the code (initially I am interested in the MP3 scanning code) with an eye towards code improvement.

I can't start now - I'm in the midst of quoting a million things in my day job, and in my night job I'm helping my girlfriend move her retail operation into a new storefront double the size. That doesn't leave much time for leisure activities :-) But I'll get started in a week or two, after the store is open and I get a day or 2 off.....

Michael

MrC
2005-10-19, 09:58
I too agree that a thank you is due for the contribution. Again, my comments are not at all directed at the tool, the contribution, or the author.

And my contributions here are educational, intended for those who do not understand the issues related to benchmarking and who expect the numbers they get to be indicative of slimserver problems.

Correct me if I'm wrong - for a post in the General discussion forum, which has an audience with varying knowledge levels, it does seem reasonable to provide some insight into what causes anomalies, and into how benchmarking and performance evaluation must be controlled to draw meaningful conclusions. In essence, what I'm saying to those who don't have this background is: understand before blaming.

I'll close again with a thanks to everyone for helping to make such an outstanding product!