DPTE – the blog!

Dynamic Process Tracing Environment User Discussion

DPTE Scheduled Downtime Friday November 16

IMPORTANT: The DPTE system will be unavailable Friday, November 16, from 7:00am to 7:30am CST (8:00-8:30am EST, 13:00-13:30 UTC) for a server update. Please do not run subjects during this time.

The ins and outs of two DPTE studies

I just completed two studies using DPTE and thought others in the community might benefit from an account of my experiences.

The first study I completed relied on undergraduates recruited through the Purdue Psychology department’s undergraduate subject pool. My colleagues (Chris Agnew and Robert Kulzick) and I are examining whether the sequence of information people receive about potential military conflicts influences their willingness to express support for those conflicts. The point of the research is to explore differences in predictions about the dynamics of public opinion offered by rational choice and heuristic judgment theories. Briefly, rational choice theories suggest that varying the order of the information people receive about potential conflicts should not make a difference to their judgments; the content of what people see or read should matter most. In contrast, Kahneman and Tversky’s work suggests that people are likely to be sensitive to information order effects.

This study ran mostly smoothly. A number of respondents who used the Chrome or Safari web browsers reported having trouble with the DPTE website; those who used either Firefox or Internet Explorer were able to complete the study successfully.

The second study I completed used subjects recruited from Mechanical Turk for research on the effects of cues about counterterrorism on anxieties about future terrorism. Once again, the study ran smoothly. We got responses from 50 people in roughly four days (we paid volunteers $0.25 each for a 13-question survey). We told those who volunteered to complete the study using either Firefox or Internet Explorer and got no complaints about the DPTE website not working. We did not successfully get responses to all the questions we asked, but it looks like the problem was on our end: a number of the questions we fielded did not have values associated with the response choices, and we did not end the study properly. It would have been nice to get the study done properly the first time, but if all of my mistakes only cost $12.50 I’ll be way ahead of the game.

NOTICE: Regular Downtime for Server Reset is at Midnight Eastern Time

Hi all. This post is from the software development team at the University of Iowa — the folks who have brought you the goodness that is DPTE. Please take this downtime into account when planning to run subjects, especially outside the United States or in the western part of the U.S.

To keep the server running smoothly, we “reset” the software every night at 11pm Central / midnight Eastern time.

We chose this time so that it would be very early in the morning in Europe (around 5am in the UK) and very late here, giving the least chance of inconvenience; no one is likely to have a room full of people using DPTE at those hours. But now that people are running subjects through MTurk, large numbers of subjects could be online at just about any hour of the day, so we thought this notice would be useful. The downtime only lasts for one minute, so it shouldn’t stop anyone from doing work, but any subjects running at that moment will be kicked out.

At the same time, the “reset” is important: implementing it seems to have cut down quite a bit on the number of server crashes. We haven’t experimented lately, but since we got a crash a few weeks ago, it seems likely that crashes would become more frequent again if we removed the “reset.”

We have put a note on the front page of DPTE telling people about this downtime and asking them to contact us if it falls at a bad time.

DPTE Unavailability Notice August 1, 8AM – 10AM CDT

The DPTE system will be UNAVAILABLE tomorrow, Wednesday, August 1, from 8am to 10am CDT (13:00 to 15:00 UTC) for a server and software upgrade. No authoring can be done during this window, and any subjects running at that time will be interrupted, so please take care to schedule accordingly.

Please do NOT plan to do any work, including running subjects, during this time. The downtime is for some important upgrades that should improve performance, especially when running large numbers of subjects simultaneously. There will also be some software upgrades as we continue to roll out social experiment features.

We apologize for the inconvenience.

DPTE and Amazon Mechanical Turk

Recently some folks have begun to use Amazon Mechanical Turk (mturk.com) as a means of getting subjects to run through DPTE experiments. This can be a quick and easy way to get a lot of subjects who fall outside the traditional “college sophomore” demographic. If you are doing this, we’d appreciate hearing about it, and in particular we’d appreciate your posting any tips you might have.

We do know that the system may become overwhelmed if hundreds of subjects attempt to access it at the same time, so one significant tip is to batch your HITs on MTurk to limit the number of people who can access the system simultaneously. We recommend batches of no more than 25-50 respondents, scheduled so that one batch is completed before you release the next. Unfortunately, to date, releasing HITs without batching in small numbers has resulted in losses of up to 10-15% of subjects who attempt to begin the study. Recent system updates should have helped with this, but batching is still a good idea.
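If you create your HITs programmatically, the cap can be enforced directly when each batch is released. Below is a minimal sketch using the boto3 MTurk client; the title, reward, file name, and question XML are illustrative assumptions, not DPTE-specific values:

```python
# Minimal sketch of releasing one capped batch of HITs with boto3.
# Assumptions: AWS credentials are already configured, and question.xml
# holds an ExternalQuestion pointing at your DPTE experiment URL.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

with open("question.xml") as f:   # hypothetical question file
    question_xml = f.read()

hit = mturk.create_hit(
    Title="Short political opinion study",   # placeholder title
    Description="A brief survey run in DPTE.",
    Reward="0.25",                           # USD per assignment
    MaxAssignments=25,                       # cap this batch at 25 subjects
    AssignmentDurationInSeconds=60 * 60,     # one hour to finish
    LifetimeInSeconds=24 * 60 * 60,          # batch visible for one day
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
# Wait for this batch to complete before creating the next HIT.
```

Because MaxAssignments caps the total assignments for the HIT, releasing one such HIT at a time keeps the number of simultaneous subjects at or below the batch size.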

One other suggestion is to require Turkers to copy a completion code, namely their DPTE-assigned subject ID, into the HIT. This will make it easier for you to verify that a submission is valid. To do this, create an Announcement that displays the system variable [[SYS_VAR_SUBJECT_ID]], along with text explaining what subjects should do and warning them not to close the announcement until they have copied the code. Make that the very last thing in your experiment; the whole experiment will close once they close the announcement.
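Verifying submissions is then mechanical. Here is a small sketch, assuming an MTurk batch-results CSV with the pasted code in an “Answer.completion_code” column and a DPTE subject file with a “subject_id” column (both file and column names are assumptions; adjust them to your actual downloads):

```python
# Sketch: cross-check MTurk completion codes against DPTE subject IDs.
import csv

# Subject IDs actually assigned by DPTE (column name is an assumption).
with open("dpte_subjects.csv", newline="") as f:
    valid_ids = {row["subject_id"].strip() for row in csv.DictReader(f)}

# Codes that workers pasted into the HIT (column name is an assumption).
with open("mturk_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        code = row["Answer.completion_code"].strip()
        status = "OK" if code in valid_ids else "NOT FOUND - review before approving"
        print(row["WorkerId"], code, status)
```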

Other ideas, experiences, and challenges?

New Materials in Repository

If you haven’t checked out the ITEM REPOSITORY in DPTE, you should. It’s where we have made available lots of stimulus materials, as well as announcements and questionnaires. Newly added is material from Aaron Hoffman: a generic research participant consent form and a question that may be used to get subjects to indicate that they agree to participate in a research study. Look for “Research participant consent form” under the Announcements tab and “Informed consent to participate” under the Questionnaire tab.

Thanks to Aaron and we hope others will contribute as well.

NEW FEATURES IN DPTE V2.4

Yesterday evening DPTE version 2.4 was rolled out. Those of you in the middle of studies should see no effects of this update; if you do find something acting oddly, please contact us right away.

This update adds to our developing set of “social experiment” functions. The idea is that you will be able to mimic a social environment, including “sharing” stimulus items between subjects, recording “likes” and “dislikes” of items as well as other reactions you would like to record, and, most importantly, allowing subjects to comment on items and to have those comments recorded and visible to other subjects, much the way a blog posting develops. This latter function is not yet in place, but will be added soon in the next update.

To see the changes, look at the “Experiment Setup” screen dashboard. There is a “Social Experiment Options” section under “Settings”, where you can see both the features that were already available and the new ones. We think these options are fairly self-explanatory. (And we should note that the online documentation for DPTE has not yet been updated to reflect these changes, so you may have to experiment a bit.)

You can now designate individual stimulus items as “shareable,” which exposes the new features to the subject: buttons appear allowing the subject to “share” the item. When a button is clicked, a counter records the click, so you get counts of the number of times any subject in your experiment chose to share the item. In fact, it is more flexible than that, because you can name the “social buttons” anything you’d like; for example, you can have buttons that say “share” or “like” or “hate” or whatever you need.

There are options to show the “click counts” on opened Shareable Items, and/or on the Flow Panel itself for a Shareable Item. The “click counts” of course are the total times clicked for each button, for each item, in a given Experiment (across all Subjects).

You can see these in the DEMO EXPERIMENT (#135) that you have access to if you want to look at the settings without modifying your own experiment.

Currently, shareability only applies to Stimulus Items. On the Stimulus Item screen, there is a “modify social” button (the button and the resulting window are in “beta”; in the future they will be hidden if the Experiment isn’t designated as Social) in which you can see the Button Click Counts for that Stimulus Item. You can modify these values if you want to set them back to 0 or “seed” them with a certain number of initial clicks, for example.

This window will contain Comment options when those are finished (i.e., the ability to view comments, add “fake” comments). We will extend these capabilities as needed. On the Player side, users will have the option to read comments and add their own, if comments are enabled for Shareable Items.

When the Commenting basics are finished and that version deployed (2.5, by the end of this month), our team will need to work on some User Documentation for the Social features. Initially we may put a page or two on the FAQ before creating a more in-depth explanation, expanding on the FAQ already available at http://dpte.polisci.uiowa.edu/dpte/pages/faq.jsp#social.

We are excited about these new features and the potential they open up for broadening studies using DPTE. Stay tuned for more!

IMPORTANT: Bug in DPTE Datasets (since corrected) MAY require action

The following email was sent to all DPTE users in our system on May 24. Please use this thread to comment on the issue or ask questions.

We have just learned from our software development team about a bug in the DPTE database that MAY have an impact on your data. It is important that you take a look at this email and consider whether it has any impact on you.

Let us first note that there is no issue with regard to properly recording what happens when you run “real” subjects through the system. Data are being stored as expected and as documented.

Second, if you have NOT run any experiments using DPTE yet, you can ignore this email. The bug referred to here has been corrected and all experiments going forward will not have this problem. So in that case, save yourself time and don’t read this long email.

However, if you HAVE run ANY studies up until today, please note the following:

In the specific case where TEST subjects have been run using the TEST IN PLAYER function (on the Experiment Setup screen) AND the test subject was run entirely through to the end of your experiment – that is, the test subject is completed – the system is failing to properly record this as a test subject. Instead, in this limited case, the test subject ends up in the database as if it were a regular completed subject.

THIS ONLY OCCURS IF YOU RUN THE TEST SUBJECT ALL THE WAY THROUGH TO THE END OF YOUR STUDY – THAT IS, IF YOU COMPLETE THE STUDY WITH THE TEST SUBJECT. It does NOT occur if you partially run a test subject. Such partially run subjects (closed before the end of the study) are NOT recorded as real subjects and are NOT downloaded when you download your data.

The result of this bug is that ANY TEST SUBJECT RUN ALL THE WAY TO COMPLETION will be included in the data you download as if it were a “real” subject.

What you should do:

You should examine any data you have downloaded to look for unexpected subjects. This will be simple if you ran “real” subjects on specific dates and “test” subjects on others; in that case you simply need to delete the incorrect subjects by date.

If you intermingled test and real subjects or don’t know what dates you ran real subjects, you will probably be able to determine which ones are TEST subjects by looking at the data. If you are like us, your test subjects have clearly invalid data in them. In particular, look at the rectangular dataset since there is one record per subject in each datafile there.

You can then delete those subjects from any data you have downloaded. Remember, any data to be deleted would have to be deleted from ALL datasets – subject data, recorded events, and the rectangular datasets.
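If you handle the downloads in Python, this cleanup can be scripted. The following is a minimal sketch assuming pandas and hypothetical file and column names (“subject_id”, “date_completed”); check the real names in your downloads, and run the same filter over the subject data, recorded events, and rectangular datasets alike:

```python
# Sketch: drop identified test subjects from one downloaded dataset.
import pandas as pd

df = pd.read_csv("rectangular_dataset.csv")   # hypothetical file name

test_ids = {"1042", "1057"}    # subject IDs you identified as test runs
test_dates = {"2012-05-20"}    # dates on which only test subjects ran

mask = (df["subject_id"].astype(str).isin(test_ids)
        | df["date_completed"].isin(test_dates))
df[~mask].to_csv("rectangular_dataset_clean.csv", index=False)
print(f"Removed {int(mask.sum())} test-subject rows.")
# Repeat for the subject data and recorded-events files as well.
```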

We can have the programmers fix your data in the system if you find any test subjects. You will need to send us the subject ID and the date for each subject to be deleted. We HIGHLY recommend this since it will correct your data permanently in case you re-download it at any time. Please let us know if you find problem subjects and would like to have them removed.

If you have any questions or problems, please ask on the new blog at https://dynamicprocesstracing.wordpress.com so others can see the question and respond. That way everyone can share in what we learn while addressing this issue.

We certainly appreciate your interest in DPTE and we’re sorry for the inconvenience. We are hopeful this will have a minimal effect since most testing is not done with the test subject run all the way through to the end. For those who signed on to DPTE early, we especially appreciate what you’ve done to help us improve the system. There have been a number of recent updates and enhancements. Keep an eye out later this summer as we will be releasing a major update that will create a set of “social experiment” functions which we think will be quite interesting.

Again, thanks for your involvement and we hope you are finding DPTE to be a valuable research tool.

DPTE Experiment in the Field Now

Hi Everyone,

My name is Tessa Ditonto and I’m a PhD candidate in political science at Rutgers University. I’m currently collecting data for my dissertation using DPTE. My experiment simulates a presidential election in which I vary certain aspects of the physical appearance of the candidates in the race. I’m interested in how appearance heuristics (like race, gender, and how competent a person looks) influence information search, as well as how these kinds of cues influence candidate evaluation and vote choice when information search is taken into account.

This is the third DPTE experiment I’ve worked on so far. I’ve really enjoyed getting to know the program (it’s starting to feel like an old friend at this point!) and seeing all of the interesting ways that scholars are using it around the world. I’m excited to have a venue now where we can all keep in touch about our work!

Tell us what you’re doing!

We’d love to know how you are using DPTE. Not only would we love to know, but we’d like to tell our funders about the exciting work going on. So if you are running a DPTE study now, or have completed one, or are PLANNING one, please reply to this post with some description of what you are doing. You can write as little or as much as you’d like. We’d just like to get a sense of the ways in which people are making use of DPTE. THANKS in advance!