I am trying to decode the output from an LVU30 ultrasonic sensor (http://www.omega.com/manuals/manualpdf/M4583.pdf) using a CR1000 and an RS485-to-RS232 converter. I can decode everything except the stage output. The footnote states the following:
Range to target (average): Byte 3 (LSB) combined with byte 4 (MSB) to form 2 bytes, then divided by 128 (inches)
Does anyone have any idea how to use the Least Significant Bit (LSB) and the Most Significant Bit (MSB) to decode the water level?
The third and fourth bytes of the HEX output are <F8><03>. What would the water level be?
Thanks
Dave O
* Last updated by: DaveO on 9/23/2013 @ 2:24 PM *
I think the LSB and MSB in this case refer to the least-significant *byte* and most-significant *byte*, not the least/most significant bits.
The way the bits typically play into things is that the most significant bit of any particular byte (8 bits) is given first, regardless of byte order; this holds for both the MSB and the LSB.
So Byte 3 is the least significant byte, followed by Byte 4, which is the most significant byte. Together they form a two-byte value that you can treat as a 16-bit little-endian ("Intel CPU-ordered") unsigned integer.
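For illustration only (in Python rather than CRBasic), combining the two bytes little-endian with the values from your question looks like this:

```python
# Byte 3 (F8) arrives first and is the least significant byte;
# byte 4 (03) arrives second and is the most significant byte.
raw = int.from_bytes(bytes([0xF8, 0x03]), byteorder="little")
print(raw)        # 1016
print(raw / 128)  # 7.9375 -> inches, per the manual's scaling note
```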
The compact way of doing this is to take the last byte's value (03), convert it to decimal (3), and multiply it by 256 (because it is the second byte/MSB, so its powers of 2 are shifted by 2^8 = 256), which gives you 768. Then convert F8 to decimal (248) and add those two values together: 768 + 248 = 1016.
1016/128 = 7.
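As a side note, the multiply-by-256 step is the same thing as shifting the MSB left by 8 bits. A quick check of the arithmetic in Python (illustrative only):

```python
msb, lsb = 0x03, 0xF8
# Multiplying the MSB by 256 is a left shift by one byte (8 bits)
assert msb * 256 == (msb << 8) == 768
total = (msb << 8) | lsb  # 768 + 248
print(total)              # 1016
```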
----
Here is an extended explanation:
The first byte (Byte 3, LSB) with a value of F8 maps
out into bits like this:
1111 1000
1111 = 2^3 + 2^2 + 2^1 + 2^0 = 8+4+2+1 = 15 = F (in hex)
1000 = 2^2 = 8
The above confirms this as the bit pattern "F8"
Now consider all 8 bits of the byte together: the first bit given corresponds to 2^7 (the most significant bit, the largest power of 2), down to 2^0 for the last bit.
So you get (128 + 64 + 32 + 16) + (8 + 0 + 0 + 0) = 248
The second byte (Byte 4, MSB) is given as 3
0000 0011
However, as the most significant byte, the first bit
corresponds to 2^15, down to 2^8 for the last bit.
So we get 2^9 + 2^8 = 512 + 256 = 768
So the first byte, F8, gives 248 and the second byte,
03, gives 768 for a total of 1016.
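The bit-by-bit sums above can be verified in Python (again, purely for illustration):

```python
lsb_bits = format(0xF8, "08b")  # '11111000'
msb_bits = format(0x03, "08b")  # '00000011'

# LSB bits carry weights 2^7 .. 2^0; MSB bits carry weights 2^15 .. 2^8
lsb_val = sum(2 ** (7 - i) for i, b in enumerate(lsb_bits) if b == "1")
msb_val = sum(2 ** (15 - i) for i, b in enumerate(msb_bits) if b == "1")
print(lsb_val, msb_val, lsb_val + msb_val)  # 248 768 1016
```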
Your scaling instructions state that you should divide by 128 to get a result in inches:
1016/128 = 7.
So <F8><03> represents 7 inches for the stage height (range to target, presumably the distance from the sensor to the water surface).
Thanks for the information. Are there easy program steps in a CR1000 to do that conversion for the stage?
Dave O
1016/128 = 7.9375. So the answer is 7.9375 inches, not
7 inches as I gave before.
Here is a demonstration program:
Public myLSByte As String * 2
Public myMSByte As String * 2
Public DecLSB 'Decimal value of byte
Public DecMSB
Public StageHeight
Units StageHeight = inches

BeginProg
  'Seed the example values once at startup; in a real program
  'they would come from the sensor's serial output
  myLSByte = "F8"
  myMSByte = "03"
  Scan (1,Sec,0,0)
    'A set of instructions would go here to obtain the third
    'and fourth bytes from the sensor's total output
    '(from the string/byte array)
    'Probably would use SplitStr, Left, Right etc. if the
    'values are given as "Hex strings"
    DecLSB = HexToDec(myLSByte)
    DecMSB = HexToDec(myMSByte)
    StageHeight = ((DecMSB*256) + DecLSB)/128
  NextScan
EndProg
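For comparison only, here is the same conversion sketched in Python, where int(x, 16) plays the role of CRBasic's HexToDec (the function name stage_height is my own, not part of any library):

```python
def stage_height(ls_hex, ms_hex):
    """Combine LSB/MSB hex strings into a 16-bit value and scale to inches."""
    dec_lsb = int(ls_hex, 16)   # e.g. "F8" -> 248
    dec_msb = int(ms_hex, 16)   # e.g. "03" -> 3
    return (dec_msb * 256 + dec_lsb) / 128

print(stage_height("F8", "03"))  # 7.9375
```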
Additionally, I gave this in my previous post:
"1000 = 2^2 = 8"
but it should have been this:
"1000 = 2^3 = 8
After you run the previously posted program, you can change the values of myLSByte and myMSByte in the Public table, and it will calculate and show you the stage height for those new byte values.