DSO Quad V2.6 RESOURCE (Update 2012/04/12)

Please Please PLEASE make it clear which files have changed and need to be installed onto the Quads, if any, in this release. All I see is a list of updates with NO IDEA which is the most current. This request has been made several times now; so much for listening to customer suggestions, etc. …

Thank you HugeMan for the update instructions … much more user friendly :smiley:

Cheers Pete

As some firmware versions are not compatible with some hardware, the next step is a table listing the up-to-date SYS/APP/FPGA versions for each hardware revision.

| Hardware # | SYS # | APP # | FPGA # |
|------------|-------|-------|--------|
| 2.6        | 1.34  | 2.35  | ???    |
| 2.2        | ???   | ???   | ???    |

I created a page on the Garden for just that.

See that page here

Please feel free to make this page easier to use. I am updating it when I can.

Would it be possible for someone from Seeed to package each update in a single zip file? This makes it much easier to link to on the Wiki.

Hello Folks,

I am just starting to use my Quad for the first time, as a tool to measure some I2C signals from a sonar device. I am seeing a lot of flicker on the screen at 20 usec/Div, and the system only triggers about 1 time in 5 at 10 usec/Div. I am using channels C/D for inputs. Channels A/B just gave me a badly slewed wave that was all but useless to read.

I was also wondering why there is a threshold on the trigger for C/D, since they only show as logic 0/1 with a single Div of difference. The only sensible trigger for these inputs is a rising/falling edge. It gets confusing when you switch trigger sources to C/D, only to find that the trigger level is set outside the signal range, with no control over it.

Since the inputs C/D have no signal processing like A/B, I can only assume that the firmware is missing trigger events on these channels. I would expect to see at the very least 10 MHz of bandwidth on these inputs. Perhaps someone can look at the trigger logic for the digital inputs.

Cheers Pete.

I just got my DSO Quad HW2.6 today and started playing with it immediately. It had the SYS_134/APP_235 firmware on it, so I upgraded to the latest SYS_141/APP_243…

But has anyone else had issues with it not saving your settings/preferences when hitting the “o” key? It claims it does, but you shut it down and turn it back on, and everything is reset to defaults.

Am I wrong in thinking it should save over a power cycle?

Yes, it should. It seems this version of the firmware does not handle saving the settings/preferences well. I have reported this bug to the designer and he told me it will be fixed soon.

Apart from the bug in the settings, I’m glad to see the new functions, and that upgrades are now documented in a txt file included with the binary.

OK, so we have an update to the firmware. Thanks for trying to make progress; however, and I am not being picky here, the function added to export the data as .CSV (Main.c, files.c, Menu.c) is all but useless.

The Header reads …

TRACK1 5V ,TRACK2 0.1V,TRACK3,TRACK4

With the data reading

144,098,060,020

We have no indication of the sample rate, so we cannot determine the timeline for plotting this data. Additionally, I can only guess that the values in the data are pixel offsets on the display, but since we don’t have the trace’s zero offset we cannot determine the voltage of these values, even though we are told the vertical scale of the trace.

The same applies to traces C and D: without knowing the zero offset of the trace on the display, we cannot determine the logic level of the trace. This is compounded by the fact that we don’t have a vertical scale for these traces either, although I will accept that the logic levels are (I presume) one grid division. Whilst we are on the subject of the logic inputs: what is the trigger point on these inputs? What levels are you using, TTL/CMOS/etc.? How is it to work on 3.3 volt logic circuits?

We have no trigger value/type/offset. In fact, we have none of the information provided by the BenF export that is required to make the data portable.
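To make the point concrete, here is a minimal sketch (Python, with purely hypothetical parameter names, since NONE of these values are in the current export) of the information a client needs before a single sample can be placed on a graph:

```python
# Minimal sketch of reconstructing a trace from exported 8-bit samples.
# None of these parameters appear in the current CSV export; the names
# (sample_interval_s, volts_per_count, zero_code) are hypothetical.
def reconstruct(samples, sample_interval_s, volts_per_count, zero_code):
    """Turn raw 8-bit sample codes into (time, voltage) pairs."""
    times = [n * sample_interval_s for n in range(len(samples))]
    volts = [(s - zero_code) * volts_per_count for s in samples]
    return times, volts

# Example with made-up values: 1 us per sample, 20 mV per count,
# zero level at code 100.
t, v = reconstruct([144, 98, 60, 20], 1e-6, 0.02, 100)
```

Without the sample interval the time axis is unknowable, and without the zero code and volts-per-count the amplitude is unknowable. That is the whole complaint.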

Please have a good look at the export format on the Nano under the BenF firmware and try to at least get close to that.

Cheers Pete.

I haven’t tried interpreting the CSV output on my own projects yet, but consider this: the resolution is stated as 8 bits per sample, so I’d interpret the values as being 000-255 of whatever scale your track is set to in the CSV header, NOT a pixel offset. It doesn’t give you a time reference, but perhaps the XPOS rate can help in that regard.

I think you’re missing the point here. In order to make the data “portable” you CANNOT make any assumptions about the scale of the data. I take your point about the resolution of the ADC, but that falls over when you are talking about the logic channels. Could you also explain how you can work out the timebase of a sample based on the XPOS, whatever that may be? In order to work out the frequency of an event you MUST have the sample rate.

In order to replicate the waveform as seen on the Quad you must also have the timebase per division and the amplitude per division.

Using the data as we have it (0-255), how do you represent a negative value without knowing the zero point?

With the data as we have it, we can plot a graph of unknown time against values of 0-255 with no reference to their base. Like I said: “all but useless”.

Cheers Pete

BIG FAT DISCLAIMER: My comments here are not based on any empirical tests I’ve done. And I’ve never seen any of the Quad source code to say. I’m not involved in the Quad development in any way.

No, believe me, I got your point. I feel your pain whenever something is not right in the universe. But I think you haven’t taken the time to give your situation enough thought in some aspects. I was being vague for two reasons: 1. As my disclaimer stated, I was just making observations at this point. I hate stating things and being discovered wrong later. 2. Since you had a pressing need for the data and I do not, maybe you can take this information and run with it in a direction while the proper person can formally fix the issue.

You should brush up on binary number systems.

Logic levels: typically 0/1, except on Pluto (as my instructors would say). So for C/D channels, use an arbitrary threshold. Again, not having done experiments and having used mine for only a short amount of time… I’d say 128, but it could be anything. When I tried a digital signal on my Quad, the peaks for the digital channels seemed very low ON THE DISPLAY, so this ‘1’ value may be a low integer value in the CSV. Anything above that level is a 1; below it is a 0. Do an experiment: hook your C/D channel to ground, then touch +5V, and look at your CSV data. It would make sense that halfway is your threshold.
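In code, that guess amounts to nothing more than this (and again, 128 is my arbitrary pick, not a documented threshold):

```python
# Speculative: convert C/D channel CSV values to logic levels using an
# arbitrary threshold. 128 is a guess, not a documented value.
def to_logic(values, threshold=128):
    return [1 if v > threshold else 0 for v in values]

print(to_logic([20, 60, 200, 5]))  # -> [0, 0, 1, 0]
```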

DC levels: typically do not flip polarity during measurement. If they do, you’ve either got a short or an AC circuit. :slight_smile: In either case, I doubt your Quad would like it very much at worst, or report anything useful at best. Hence, I’d assume 000 is GND and 255 is the full scale according to your CSV header value.

AC levels: one man’s 000-255 is another man’s 000-127 and -128 to -1. None of my data saw a negative sign, only 3-digit zero-padded numbers. So my assumption is that the CSV contains a uint8_t (unsigned 8-bit integer), but should be interpreted as an int8_t (signed two’s-complement 8-bit integer). See en.wikipedia.org/wiki/Two%27s_complement
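If that assumption holds (and it is only an assumption on my part), the reinterpretation is a one-liner:

```python
# Reinterpret an unsigned CSV byte as a signed two's-complement value.
# This reflects my assumption about the format, not a confirmed fact.
def as_signed(u8):
    return u8 - 256 if u8 > 127 else u8

print(as_signed(144))  # -> -112, if the byte was really a signed sample
print(as_signed(98))   # -> 98; values up to 127 are unchanged
```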

As for your timebase, take a look at your display. In the bottom right you see “XPOS” written in orange. Look at the top, to the right of your D channel, where it typically says “AUTO”/“NORM”/etc.; under that is the timebase, also written in orange.

Good luck!

I agree with Bainesbunch. His post in this thread of 8 June is a methodical and well-thought-out description of what is lacking.

After seeing the improvement of the calibration process, I am confident that Bure and associates can also provide the needed CSV file header data described by Bainesbunch in their next CSV release.

It would also be very useful if each data point came from the capture buffer, not just from the display. I don’t know how many points are provided now, but the entire capture buffer would be much more useful, if it is not already provided. If SD memory size is considered an issue, then allow the user to capture either the display or the entire capture buffer, just as the DSO Nano allows. As Bainesbunch has already pointed out, look closely at how the Nano manages files.

Doh, that is why I am posting here.

Here you go with the assumptions again. I think, if you bother to check your facts, I have not stated anything that is not based on actual findings.

Oops, another assumption. And please don’t tell me that there is some magic method to determine whether an 8-bit binary number is a signed or unsigned int. If it is in the range 0-255 with no indication as to its type, how can you tell a two’s-complement negative number from an 8-bit positive number? … Come on now, if you are going to quote binary logic, at least think it through before posting.

Now, if our number were an 8-bit signed int, then we only have 7 bits to play with and our resolution goes down to 128.

Doh, well yes … but what use is that if it is not in the export file? And even if it were, what use is it without the sample rate? The timebase can be used to reconstruct the signal shape, but only together with the sample rate.

Please explain to me how you would plot, say, 500 samples on the Y axis against a time-based grid without knowing the timebase or the sample rate. How would you even begin to construct a faithful representation of the original?

Once you have that worked out, you can explain how you plot a bipolar DC square wave with a 60% positive and 40% negative amplitude from zero without knowing the zero offset, unless we accept that our values are limited to +/- 127 and we assume (not that I would ever do that :slight_smile:) that 127 is our zero signal. Oddly enough, though, with all my probes grounded I am getting the following levels from the CSV file: 144,098,060,020. Perhaps a re-calibration is in order.

I suggest you have a good think about the problem before posting your next reply, and stop simply being contrary in your contribution.

One final thing about the Y values: here is a screenshot of the waveform that the export is based on. Take a GOOD look at the signal offset from the bottom of the screen and tell me that it does not look like the values 0-255 are in fact based on pixel position, working up from the base of the graph.

I.e. 20 (Channel D), then 60 (Channel C), then 98 (Channel B), then 144 (Channel A)…

IMAG001.gif

Hmmm, is the value 0-255 actually the ADC’s 8-bit resolution, or another assumption :wink: on the part of PommieZ?

OK, I have edited this post to add the following screenshot of the traces, all with their baselines set to the middle of the display, as seen here

IMAG001.gif

with the following data coming from the export file

TRACK1 10V ,TRACK2 10V ,TRACK3,TRACK4,

100,100,101,101,

Ahh, a picture paints a thousand words …

Cheers Pete.

With your award-winning charm and humility, truly, I can’t imagine why you’re having difficulty getting assistance. Quit being so defensive and hostile.

Wow. Just, wow. First you come up with some wild idea that something impossible can be done, and then you try to claim I came up with the impossible idea and pin ME as being the idiot. I never claimed any such feat was possible. That is why I stated YOU may have to INTERPRET it that way. You know, because YOUR brain has to make the decision whether it is a valid range… not some “imaginary magical computing process”.

Dude, I’ve been programming for 25 years. You burnt your bridge with me, so I’m done lobbing first-year compsci answers back to you.

Wrong. Your resolution isn’t “128”. Your resolution is 8 BITS. It will ALWAYS be 8 bits. Whether your number range is 0-255 or -128 to 127, your resolution is the same… as in: you can resolve up to 256 unique values.

Did I state it was somehow magically embedded in the CSV? No, I did not. If you understood the construction of a CSV table of data, you’d understand that there is no logical place to PUT the time resolution into it that both makes sense and is constructed logically. XML would have been a better design choice, because you could embed such metadata, but guess what: you don’t have that choice. You have CSV. Suck it up.

Aren’t you precious, arrogant as hell and then asking for the answer. If you were somewhat less close-minded, I’d take the time (tonight, when not at work, without my Quad here) to help, but some of us have day jobs and have to prioritize. It should be obvious in 10 seconds how to write a perl script that mangles it in a way you need it to. I’ll get you started: thereifixedit.pl -scale:2 mydata.csv fixed.xml

And perhaps you also have unrealistic expectations in terms of accuracy, as it relates to floating ground voltages for whatever you’re using as a ground. Depending on the voltage scale of your Quad, your calibration, and what you grounded against, those numbers may be reasonable. You’re assuming you need to recalibrate.

Beggars can’t be choosers, and some children don’t play well with others. Like I said earlier, your problem is not MY problem. But now, it’s really not my problem.

@PommieZ
I think you don’t clearly understand what Bainesbunch is expecting.
On the screen of the Quad you can have two traces with different vertical scales at the same time, so if that information is not in the CSV file you are missing parameters needed to reconstruct (or do any computation on) the data.

If one trace is 100mV/div and the other 5V/div, and both are coded in the same 0-255 space, how can you compare them?

Moreover, if the dump to the CSV file is not the acquired values but what is displayed, it has no meaning. What we expect in the CSV file is the raw values together with the calibration information.
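A sketch of the problem: to compare the two traces you must apply each track’s own volts/div, but the file gives you neither the zero code nor the counts-per-division (both numbers below are pure assumptions):

```python
# Sketch: two tracks coded in the same 0-255 space can only be compared
# after scaling by each track's own volts/div. COUNTS_PER_DIV and the
# zero code are assumptions; the current export provides neither.
COUNTS_PER_DIV = 25  # assumed

def to_volts(code, volts_per_div, zero_code=100):
    return (code - zero_code) / COUNTS_PER_DIV * volts_per_div

print(to_volts(144, 0.1))  # the raw code on a 100mV/div track: ~0.18 V
print(to_volts(144, 5.0))  # the same code on a 5V/div track: 8.8 V
```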

IN THEORY, the header line of the CSV has the scale. It states the track column (A,B,C,D), the range (DC or AC; so perhaps a uint8_t for DC, int8_t for AC? This is yet to be proven), and the scale in mV/div.

So although the data has values of 000-255, you would need to do the math involving the scale indicated in the column header.

For simplicity, if it were DC without any polarity switching, I would expect this to be akin to a percentage of the header’s scale. So values of 000-255 would map linearly to 0-100% of the scale, multiplied by the 100mV/div for that track, depending on the matching column. You would do similarly with the 5V/div for the second track, using that column header’s information.

Each row, therefore, is a set of samples taken in the time division you pre-established when you recorded the data.

The ‘scale’ of the row values, however, is speculative. For all I know, DC is not 0-255 in unsigned format. It could very well be only 0-127 for DC data. This is why I was suggesting that Bainesbunch experiment by recording known limits, such as GND and +5V on a 5V/div scale.

So I’ve done my own tests, now that I got home and had a chance to use my own data. Bainesbunch is correct: the values stored are a simple Y-position based on what the screen displays.

The scale (or value range) is 0-199, inclusive, and matches the vertical plot point on the screen. I fed the WAVE OUT into CH-A and created a 20Hz square wave close to the top of the screen (so it might purposely wrap) to see if there were any limits. When you look at the CSV data, it shows transitions like this:

199,103,060,198,
199,101,060,198,
180,101,060,198,
180,103,060,198,

180,101,060,198,
180,103,060,198,
199,103,060,198,
199,101,060,198,
…etc…

It didn’t matter what sample range you did this in; the values are just Y plot positions, regardless.

Also, the TRACK D data consists of a ‘pure zero’ of the (C|D) function, in order to ignore any differences between his calibration and mine. I placed it at the top, then backed it down one notch, hence: 198.
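So decoding comes down to screen geometry. A sketch, with the caveat that the pixels-per-division figure and the baseline position are my assumptions, not anything the file tells you:

```python
# Decode the Y-position values (0-199, per the test above) back to volts.
# PIXELS_PER_DIV and the baseline position are assumptions; neither is
# recorded in the file, which is exactly the complaint.
PIXELS_PER_DIV = 25  # assumed

def y_to_volts(y, baseline_y, volts_per_div):
    return (y - baseline_y) / PIXELS_PER_DIV * volts_per_div

# CH-A codes from the dump above (199 and 180), with an assumed baseline
# at Y=100 and an assumed 5 V/div:
print(y_to_volts(199, 100, 5.0))  # 19.8 V
print(y_to_volts(180, 100, 5.0))  # 16.0 V
```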

Do I agree with it? No. It most definitely is useless. Is there a better way to do it, and do I wish it was done the way I outlined? Yes. :slight_smile: As I said from my first response, I was being vague until I had real data to test it for myself.

And those .DAT files… they’re equally brain-dead.

In fact, I have already raised with Bure the 2 most important issues: 1. the preset error, 2. the .csv file improvement.

Ohh dear me … such a shame that personal attack is the last weapon of a lost debate.

Just 25 … well, perhaps you should come back in another 5 when you have caught up with me. I cut my teeth on MOS 6502 and Zilog Zxx kit. Please make no presumptions about my programming skills. I guess the fact that I have written a client to display the old Nano XML output, featured here on the forum, may have passed you by.

How arrogant are you to assume that you may be a better coder than I am, and that this somehow allows you to berate me …

I fully understand the requirements of the output file for creating a meaningful reconstruction of the data in an external client; after all, I have written one.

This lack in the file was the point of my first posting, which, without facts, you chose to attack as conjecture. Time has proved me correct, and by your own admission my original statement “all but useless” has also proved correct.

Since you have decided that, because I guess you were wrong (and I am too stupid to understand your argument), you no longer want to talk to me, this debate is now closed.

However, the problem with the file format remains, and I will continue to make my observations.

Cheers Pete.