Negative observations


Viewing 8 posts - 1 through 8 (of 8 total)
  • #574478
    Dr Paul Leyland
    Participant

    Ever since first submitting results where the VS is too faint to estimate (visually) or measure (photometrically), I have reported it as being fainter than the faintest comparison star which could be measured. It seems to me that perhaps one should instead report it as being fainter than the dimmest field star which can be measured with confidence.

    The motivation for this post comes from a recent observation of SV Ari and its field galaxy, where the VS is completely invisible on the image in question. The faintest comparison is 169, so the report now in preparation effectively contains “[16.862 ± 0.022” for the variable. However, photometry of the galaxy also yielded magnitudes of other field stars. I didn't keep their measurements, but remember that an accuracy of ~0.1 magnitudes in Johnson V was achievable down to V=19.5 or so.

    What do others do? Should I continue to report [16.9, or change the pipeline so that a report of something like “[19.456 ± 0.123” is given?
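For concreteness, the proposed pipeline change could be as simple as scanning the field-star photometry for the faintest star measured to better than a chosen error limit. A minimal sketch in Python — all the magnitudes below are invented for illustration, not real SV Ari measurements:

```python
# Hypothetical field-star photometry: (Johnson V magnitude, formal error).
field_stars = [
    (16.862, 0.022),   # the "169" comparison, faintest in the official sequence
    (18.104, 0.041),
    (19.456, 0.095),
    (19.872, 0.180),   # error too large -- excluded by the threshold below
]

ERR_LIMIT = 0.1  # report the faintest star measured to better than this

def faint_limit(stars, err_limit=ERR_LIMIT):
    """Return (mag, err) of the faintest star with formal error below err_limit."""
    usable = [s for s in stars if s[1] < err_limit]
    return max(usable, key=lambda s: s[0])

mag, err = faint_limit(field_stars)
print(f"[{mag:.3f} ± {err:.3f}")   # fainter-than limit in VS notation
```

The reported limit is then tied to a star actually measured on the image, rather than to the end of the official sequence.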

    Supplementary question: what about historical data? All the old images can be re-measured and the results re-submitted if it were thought useful.

    #581792
    Gary Poyner
    Participant

    You should report the faintest star visible (recorded) which is measured on the sequence you are using.  Adding fainter stars of your own means you are altering the official sequence, and the data might not be accepted into the VSSDB (depends on the object – I think).

    If you wish to help extend the sequence to SV Ari beyond 169, then you should have a chat with Jeremy.

    Gary

    #581793
    Dr Paul Leyland
    Participant

    I understand what you are saying but it goes against the grain to throw away information.  Here’s my reasoning.

    Suppose that on the night in question SV Ari was at V=19.45 and that I had easily enough SNR to measure it to an accuracy of 0.01 magnitude based on an extrapolation of the sequence magnitudes down below 169. I would report a positive result.

    Suppose that a field star was also measured on exactly the same image at, say, V=19.61, also to an accuracy of 0.01 magnitudes. I feel I would be justified in recording it as such, if only in my own records. Note that whether or not that second star is a variable is irrelevant, because it is being measured at a specific point in time.

    Now, a week or so later, SV Ari has faded to a true magnitude of, say, V=22.0, which is way below the detection limit. However, that same field star is still measurable on an image taken at the later date. For the sake of example, let's say it is now measured at V=19.62 with an accuracy of 0.01, again using only the official sequence. It is quite irrelevant in this particular instance whether that star has truly faded slightly or whether the difference between the two measurements arises for SNR reasons. It is quite clear that SV Ari at this date is significantly fainter than V=19.6. It seems wrong to me to throw away the additional information about the limit on the brightness of the variable.
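The arithmetic behind "quite irrelevant" can be made explicit. A short sketch (using the invented values from the example above) checks whether the two field-star measurements actually differ significantly:

```python
import math

# Two measurements of the same field star on different nights
# (values copied from the worked example; they are illustrative only).
m1, e1 = 19.61, 0.01
m2, e2 = 19.62, 0.01

# Difference in units of the combined error: well under 2 sigma, so the
# two nights are consistent whether or not the star truly varied.
sigma = abs(m2 - m1) / math.hypot(e1, e2)
print(f"{sigma:.2f} sigma")
```

At ~0.7 sigma the two nights agree, so the star serves equally well as a brightness reference on either image.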

    Please note, in the latter case, I would NOT be using the V=19.6 star as part of the sequence to determine instrumental magnitudes and their errors.  All of that is still being done exclusively with the standard sequence through a lengthy extrapolation.

    Yes, I'm quite prepared to work with Jeremy and/or the AAVSO to extend the sequence in this case and others. However, prospective additional sequence members will need to be checked to confirm that they do not vary significantly on timescales ranging between hours and years before they can be used with confidence. (This issue has already bitten me: I discovered that one of the AAVSO comparisons for V3721 Oph is an EA with minima of 0.025 and 0.010 magnitudes.) My suggestion, on the other hand, requires no assumption of constancy, only that the limiting magnitude can be measured at the time of observation.

    #581808
    Gary Poyner
    Participant

    It makes sense to me, and I understand your frustration at recording a <169 star when you're nearly at 20, but the sequences for many stars are embedded into the DB, so if you report a negative value of <196 when the listed sequence ends at 169, then that observation might be rejected.  Not all stars' sequences are entered into the DB, and I'm not sure about SV Ari, but it might be.  If it isn't, and you enter <196 with a sequence code for a sequence which ends at 169, then that observation will be accepted but will cause some confusion when the data are looked at.

    You might report such an observation with a different sequence number (one which you have designed yourself).  This would then be flagged as an unknown sequence, but it would make it into the DB.  Then, in 20 years' time or so when SV Ari wakes up again, you could monitor the outburst using the recognised sequence of the day – by which time the limit might be in the 20s.

    The less confusion in the DB, the better for all concerned.

    Happy Christmas…

    Gary

    #581809
    Jeremy Shears
    Participant

    This paper by Arne Henden outlines how you can determine the faint limit of a CCD image (see page 75 for Arne’s paper). As Arne points out: “the devil is in the detail”.

    This is for special projects. In other cases one should stick with the sequence for the reasons Gary gives.

    #581810
    Dr Paul Leyland
    Participant

    My submissions to the DB include measurements of the full sequence.  Accordingly, I don't see why confusion should arise. If the sequence changes, perhaps through the addition of fainter members, all significant information is present to reduce to the new sequence.

    For the example given, a snippet of an entry would look like

    VarAbsMag  VarAbsErr  CmpStar  RefMag  RefErr  CMMag   CmpErr
    [19.6203   0.0123     169      16.862  0.022   16.765  0.0095

    where everything other than “169 16.862 0.022” is fictitious, invented as an example, and the CmpStar through CmpErr fields for the rest of the sequence have been omitted here for brevity; they would be present in the true submission.
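For what it's worth, a line in that format could be assembled like this. The field names are copied from the snippet above, and the numeric values are the same fictitious ones; real submissions would repeat the comparison-star fields for the whole sequence:

```python
# Sketch of assembling one submission line in the format shown above.
# A leading "[" on the variable's magnitude marks a fainter-than limit.
record = {
    "VarAbsMag": "[19.6203",
    "VarAbsErr": "0.0123",
    "CmpStar":   "169",       # only one comparison shown here; the real
    "RefMag":    "16.862",    # submission carries CmpStar..CmpErr for
    "RefErr":    "0.022",     # every member of the sequence
    "CMMag":     "16.765",
    "CmpErr":    "0.0095",
}
line = " ".join(record.values())
print(line)
```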

    #581814
    Dr Paul Leyland
    Participant

    Thank you. I’ve read that paper in the past, and imaged M67, but it’s good to read it again. Section 4 (p77) is particularly relevant, especially the comment about the difference between visual and CCD estimates of a 17.4 object when the image goes down to 20 or so. Another apposite comment is on page 79: “This magnitude-bridging technique is common in the professional world, as most of the standard stars are too bright for large telescopes.”

    Please note that for present purposes I am emphatically NOT trying to detect the faintest possible object on the image. I am trying to measure the magnitude of the faintest object which has an error smaller than a specific limit, 0.1 magnitude say. In this case the SNR is way above the 5-sigma limit mentioned in the paper.
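The link between a magnitude-error limit and SNR is the usual first-order relation σ_m ≈ 2.5/(ln 10 · SNR) ≈ 1.0857/SNR. A one-liner makes the point that a 0.1 magnitude error limit sits well above a bare 5-sigma detection:

```python
import math

def snr_for_error(sigma_mag):
    """SNR needed for a given magnitude error, using the first-order
    relation sigma_m ~ 2.5 / (ln 10 * SNR) ~ 1.0857 / SNR."""
    return 2.5 / (math.log(10) * sigma_mag)

print(snr_for_error(0.1))   # ~10.9, comfortably above a 5-sigma detection
```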

    As I pointed out earlier, if I used a sequence which ends at 16.9 to measure a variable at, say, 19.5 ± 0.1, that estimate would be accepted without question. Why should a measurement to the same accuracy of an equally bright nearby star be rejected purely because it is not a (known) variable?

    Behind all this is my firm belief that one should not throw away data. It should be preserved for later scientists to re-analyse if they wish. For my part I store every image which is not too badly corrupted by focus errors, guidance errors, passing clouds, etc. In only 18 months I already have more than thirty thousand images, together with their metadata in a SQL database. All can be retrieved and re-examined for whatever reason — pre-discovery observations, perhaps, or searching for previously unknown variables.

    Don't misunderstand me: I will continue to play by the rules as they stand, but it seems to me that the present rule is, to use Arne Henden's phrase, extremely conservative.

    #582498
    Dr Paul Leyland
    Participant

    I now have software which measures everything it can find in an image and records all those data which have a formal error better than 0.15 magnitudes (which corresponds to an SNR of ~7) in an SQL database, along with other metadata such as the sequence used and relevant excerpts from Gaia-DR2. I will be able to report cases as discussed above to the BAA photometry database if given permission to do so.
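The core of such a store is small. A minimal sketch using Python's built-in sqlite3 — the table layout, star labels and measurements below are invented for illustration, not the actual schema of the software described:

```python
import sqlite3

ERR_LIMIT = 0.15  # ~SNR 7, since sigma_m ~ 1.0857 / SNR

# Hypothetical schema: one row per measured object per image.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE photometry
              (star TEXT, jd REAL, mag REAL, err REAL, sequence TEXT)""")

measurements = [
    ("field star A", 2459200.5, 19.456, 0.095, "seq-169"),
    ("field star B", 2459200.5, 20.310, 0.210, "seq-169"),  # too noisy: dropped
]
rows = [m for m in measurements if m[3] < ERR_LIMIT]
db.executemany("INSERT INTO photometry VALUES (?, ?, ?, ?, ?)", rows)

kept = db.execute("SELECT COUNT(*) FROM photometry").fetchone()[0]
print(kept)   # only measurements below the error limit are stored
```

Filtering on the formal error at ingest keeps every retained row usable as a limit measurement later.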

    I can do rather more, too. An article may be written for a forthcoming VSSC. The software will be made freely available on my web site in due course. Whether access to the database will also be forthcoming depends on a number of matters, including security, bandwidth and storage capacity.
