Thursday, March 22, 2012

3.33 millisecond accuracy in datetime

I'm still not clear on the technical reason why datetime values are
accurate to 3.33 milliseconds. I think I understand that it has to do
with the fact that the time value is stored as the number of ticks
since time 0 of that day, while the date is stored as the number of
days from day 0 (1/1/1900, I believe). What I can't seem to get is why
a CPU tick would equal 3.33 milliseconds. Is this an electrical
engineering issue based on voltage/frequency type things and how the
tick is "fired" (I'm obviously not an electrical engineer), or some
other kind of software-based limitation? Any info that could help
clarify this for me is appreciated.
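
(For anyone who wants to see that granularity directly, here is a quick
illustrative sketch, assuming SQL Server's documented rounding behaviour:
datetime milliseconds always land on .000, .003 or .007, i.e. increments
of 1/300 of a second.)

-- milliseconds are rounded to the nearest 1/300 of a second
SELECT CAST('2012-03-22 12:00:00.001' AS datetime) AS r1,  -- rounds to .000
       CAST('2012-03-22 12:00:00.002' AS datetime) AS r2,  -- rounds to .003
       CAST('2012-03-22 12:00:00.005' AS datetime) AS r3;  -- rounds to .007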
I don't think it has anything to do with CPU clock ticks. A datetime
value is stored as two 32-bit integers: one for the date part and one
for the time part. Ignoring the date part and just looking at the time
part, you can see that there are only a finite number of possible
values for the time.
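
(One way to convince yourself of that layout, as a rough sketch rather
than anything official, is to cast a datetime to binary(8): the first
four bytes come out as the day number since 1900-01-01 and the last
four as the count of 1/300-second ticks since midnight.)

-- day 1 after 1900-01-01, one second past midnight = 300 ticks of 1/300 s
DECLARE @dt datetime;
SET @dt = '19000102 00:00:01.000';
SELECT CAST(@dt AS binary(8)) AS raw_bytes;              -- 0x000000010000012C (0x12C = 300)
SELECT CAST(0x000000010000012C AS datetime) AS rebuilt;  -- 1900-01-02 00:00:01.000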
There ought to be 2^32 different possible values, which equates to an
accuracy of about 1/49,710 of a second (that is,
1/((2^32)/(24h*60m*60s))), but for some reason the Microsoft developers
decided to limit it to 1/300 of a second. I don't know why, but I'm
guessing it's convenient for calculations. Whatever the reason, it's
not about CPU clock ticks but rather how the data is physically stored.
This is also, I believe, why smalldatetime is only accurate to the
minute: with only a 16-bit integer to store the time information, it
can hold at most 65,536 different time values (about 45 per minute of
the day, which isn't enough for one per second, so making it accurate
to the minute seems the convenient choice).
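
(The minute-level granularity of smalldatetime is just as easy to
observe; another illustrative query, relying on the documented rule
that 29.998 seconds or less rounds down and 29.999 or more rounds up.)

-- smalldatetime keeps only minutes, so seconds are rounded to the nearest minute
SELECT CAST('2012-03-22 12:00:29.998' AS smalldatetime) AS rounds_down,  -- 12:00:00
       CAST('2012-03-22 12:00:29.999' AS smalldatetime) AS rounds_up;    -- 12:01:00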
--
*mike hodgson*
http://sqlnerd.blogspot.com
