1. Interesting use of checkboxes instead of radio buttons on many of the questions. I'm not sure whether this was accidental or an acknowledgement that ambivalence (being torn between strongly agreeing and strongly disagreeing) is different from "neutral". Either way, I worry about the integrity of the survey method.

  2. Funny you should mention the integrity of the survey; I rather assumed it would be unscientific. If I remember correctly, the much-bandied faith survey of Unitarian Universalists (the instrument was in the Unitarian Universalist World) was self-selecting and unscientific, but it gets remembered with more "science" in each retelling. I remember that a lot of Christians were displeased with their options and opted out (I did) — "praying to God" wasn't even an option for what one could do in worship — and voilà! the Christian portion of the UUA shrank. I vowed not to make that mistake again.

    Call me a cynic, but the only bad UUA poll is the one you don’t have your opinion in.

  3. Well, even if you don't meet scientific standards, I think there is still a certain set of standards you want to meet for the results you produce to have any validity. For example, I hope there are measures in place to prevent ballot stuffing. Granted, the limitations of an anonymous process mean you cannot completely prevent ballot stuffing, but you can certainly make it more difficult. At an absolute minimum, I would hope that some basic standards are met (answers are recorded correctly, answers reported match the questions asked, etc.).

    Typically you would use checkboxes for non-exclusive answers, where you are not expecting the answers to sum to 100%; for example, it makes sense to report "80% of visitors ate pancakes, 90% drank coffee, and 75% ate French toast at this morning's Sunday breakfast". You would use radio buttons for exclusive choices: "30% arrived more than 10 minutes early, 20% arrived 10 minutes or less early, and 50% of attendees arrived late". It really doesn't make sense to say "80% arrived early and 70% arrived late". That's just nonsensical data.
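
    The distinction can be sketched in a few lines of Python. The breakfast and arrival figures below are invented to mirror the examples above: multi-select (checkbox-style) tallies can legitimately sum past 100%, while single-select (radio-style) tallies must sum to exactly 100%.

```python
from collections import Counter

# Hypothetical checkbox-style responses: each visitor may pick several items,
# so the per-item percentages can sum past 100%.
breakfast = [
    {"pancakes", "coffee"},
    {"pancakes", "coffee", "french toast"},
    {"coffee"},
    {"pancakes", "coffee", "french toast"},
]
n = len(breakfast)
item_counts = Counter(item for answer in breakfast for item in answer)
for item, count in sorted(item_counts.items()):
    print(f"{item}: {100 * count / n:.0f}%")  # coffee alone is 100%

# Hypothetical radio-style responses: exactly one choice per attendee,
# so the category counts always sum to the number of respondents.
arrival = ["early", "on time", "late", "late"]
arrival_counts = Counter(arrival)
print(sum(arrival_counts.values()) == len(arrival))  # always True
```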

    And mainly I am just scratching my head wondering whether the person simply doesn't understand the survey software, or actually intended the survey to look like that. It would be like a survey question that asked sexual orientation with two checkboxes, one for "straight" and one for "gay", with the expectation that anyone who was bi would simply check both, and anyone selecting neither was asexual.

    The only way I can think of to report the data being collected with validity would be to report all combinations of the answers: "20% strongly agreed, 19% strongly disagreed, … 2% checked both Strongly Agree and Strongly Disagree, 1% checked both Strongly Agree and Agree, etc." But my worry is that this is not how the answers will be reported; I suspect they will simply report the percentage of people selecting each checkbox. Another option would be to exclude any survey submitted with multiple answers to any particular question.
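
    A minimal sketch of that "report every combination" idea, using invented responses: tally each distinct set of boxes a respondent checked, rather than each box independently, so double-checks surface as their own category instead of inflating two others.

```python
from collections import Counter

# Invented responses for illustration; each entry is the set of boxes checked.
responses = [
    {"strongly agree"},
    {"strongly agree"},
    {"strongly disagree"},
    {"strongly agree", "strongly disagree"},  # an "ambivalent" double-check
    {"agree", "strongly agree"},
]
n = len(responses)

# frozenset makes each combination hashable so Counter can tally it.
combos = Counter(frozenset(r) for r in responses)
for combo, count in combos.most_common():
    label = " + ".join(sorted(combo))
    print(f"{label}: {100 * count / n:.0f}%")
```

    Reported this way, the percentages of the combinations sum to 100% again, since every respondent falls into exactly one combination.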

    I think any poll is bad if it generates inaccurate or misleading results, but it’s really bad if the poll is treated as if it were valid.

  4. Great to see this is getting around. It's not scientific per se; it's anthropology. Combined with interviews of UUs, it will give us a picture. Also, don't assume how it's created. I'm no expert, but I have a method. Glad you all are working this over. I had no idea how far it went. Daniel
    P.S. I am the former assistant minister at King's Chapel.
