Monday, September 17, 2018

9/17 summary

Hyunji
  • Re-survey the proportion of long vowels in the Jeonnam dialect (dictionary, corpus)
  • From the transcription files, extract the words with a high vowel in the first syllable (with Margaret's help), code whether each word's first-syllable vowel is long, then inspect the corresponding sound files in Praat and measure the rate at which the high vowels are devoiced
Margaret
  • Check the sampling rate of the HPP experiment stimuli together with Jihyo
  • Literature review on whether infants' monosyllable segmentation testing is better started at 6 or 7.5 months

Tuesday, September 11, 2018

Zepman coding (Margaret)

Segmentation_Exp1
I've spent the last 2 days getting to know the Zepman coding language and experiment set-up. There are a lot of little problems I ran into, and here's how I managed to fix them. Most of my coding is not the most elegant, but it's working as of now! Things that still need to be addressed are highlighted.


  1. You can't use Hangul in any of the code. Our stimuli files had Hangul names and the experiment only ran after I changed them to romanized characters.
  2. The sampling rate of the output device is set in a package that is not easily accessible (under program files->zep->2.0->modules). It is one of the files with playback or std_ in it. In our case the sampling rates of our stimuli were not all the same, so some became very high pitched and some very low when run in the experiment, even though they sounded fine in Praat. I fixed the problem by resampling all the stimuli in Praat to the rate in the Zepman file (48000). Because this is a global value for the Zepman program, I don't think it's possible to change it per experiment (perhaps if you write an extra script, but I'm not that fancy yet). It seems easier to just make sure all the stimuli you use are sampled at 48000 Hz (a quick check for this is sketched right after this list).
  3. All the scripts I used were based on the code provided on the Zepman website for an infant headturn procedure. Within an experiment folder, you will find the broad script for the task (in our case called segmentation), a stimuli folder, and a folder for each phase of your experiment. There are a few other less important folders in there too. Within the folder for each phase, you will find broadly these scripts: "task", "test_handler", "stimuli", "def", "output".
  4. The task I was setting up has a familiarization and a test phase. The mechanics of each phase are broadly the same: a sound and a red dot appear on the left or right, and the time that the child attends to it is tracked. If the child looks away for more than 2 seconds, the trial ends. The difference between the phases is that during familiarization we must track online the amount of time the child actively looks at the item, because once they have amassed 30 seconds of looking at a particular familiarization item, that item does not need to be played again. So the test_handler script is identical in both phases, but all other scripts had to be tweaked slightly.
  5. STIMULI SCRIPT: This is straightforward; just rename the sound files to the stimuli you want to use. They must match the names of the files in your stimuli folder. The index number of the items here is important, at least for the familiarization script. I kept the index number consistent with the item even though it was repeated many times. Having a consistent index number makes it possible to keep a running count of time spent on the item in the task script. For now there are 4 versions of the task, and the order of testing items is not randomized by participant (i.e. all children who get ORDER1 will get the testing items in the same order, although the items themselves are pseudorandomized within the testing phase).
  6. DEF SCRIPT: Here you can change the number of trials you want to run or how long the child must look away before the trial ends. Currently both testing and familiarization have 12 trials, but we need to discuss whether we want to create more familiarization trials in case infants don't amass enough orientation time within those trials. It can be a huge number, since the task automatically jumps to testing once enough time is amassed. UPDATE: We changed it to about 30 trials for familiarization just in case we have a fussy baby--however, in general they need about 6 trials to amass enough time to jump to testing (~3 on each item).
  7. TASK SCRIPT: Most of the changes to the TASK script in the familiarization folder were to make sure that once 30 seconds of active looking time was amassed for an item, it was no longer played, and similarly that once both familiarization items amassed 30s of looking time, the task jumped to the testing phase. If a child amasses 30s of looking time but the trial isn't over yet and the child is still looking, the trial continues in its entirety or until the child looks away. (This bookkeeping is sketched after this list.)
  8. Another note on the stimuli. The task is randomized such that for ORDER1 and ORDER2 the words familiarized are 국 and 밥, but for ORDER3 and ORDER4 the words familiarized are 컵 and 책. The difference between ORDER1 and ORDER2 is that in ORDER1 the first passage presented is 국 but in ORDER2 the first passage presented is 밥, and then it alternates until they amass 30sec. Similarly ORDER3 starts with 책 but ORDER4 starts with 컵. The order of test items is different for all versions.
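
Since a stimulus at the wrong rate plays back audibly pitch-shifted, it's worth checking the rates in bulk before running anyone. Below is a minimal Python sketch (not Zepman code; the folder path is a hypothetical placeholder) that flags any wav file whose sample rate isn't 48000 Hz, using only the standard library:

  import wave
  from pathlib import Path

  TARGET_RATE = 48000
  stimuli_dir = Path("segmentation/stimuli")  # hypothetical path

  for wav_path in sorted(stimuli_dir.glob("*.wav")):
      with wave.open(str(wav_path), "rb") as w:
          rate = w.getframerate()
      if rate != TARGET_RATE:
          print(f"{wav_path.name}: {rate} Hz (expected {TARGET_RATE} Hz)")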
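
Here is also an illustrative Python sketch of the familiarization bookkeeping from items 4, 6, and 7 above. It sketches the logic only, not the actual Zepman code, and all names in it are hypothetical; the threshold constant is the value that gets changed to 45000 in the disyllable version described later:

  FAM_THRESHOLD_MS = 30000    # 45000 in the disyllable experiment
  LOOKAWAY_LIMIT_MS = 2000    # a trial ends after 2 s of looking away

  def next_fam_item(accumulated, last_item):
      """Pick the next familiarization passage: alternate between the two
      items, skipping any item that has already amassed the threshold.
      Returns None once both are done, i.e. jump to the test phase."""
      candidates = [i for i in accumulated if accumulated[i] < FAM_THRESHOLD_MS]
      if not candidates:
          return None
      if len(candidates) == 2 and last_item in candidates:
          candidates.remove(last_item)
      return candidates[0]

  def record_trial(accumulated, item, looking_ms):
      """Add one trial's actively-looking time to the item's running total."""
      accumulated[item] += looking_ms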
UPDATE: 3 other experiments are now also programmed in Zepman--mostly based off of the segmentation_exp1 script. The changes/task design are described below:


Disyllable_segmentation_exp2
This task is literally a copy-and-pasted version of the task described above, except the stimuli are 2 syllables long. The passages and word lists are slightly longer (25 sec in the 2nd exp vs. about 20 sec in the 1st exp), although the number of words in each word list is the same (20 repetitions). Therefore we also wanted to give the infants a little more exposure time during the familiarization phase, so we upped the time necessary to amass before moving on to testing. How to do this is described below:
  1. In the familiarization folder, open the Task script. Everywhere it says 30000, change it to 45000. There should be 4 places to make the change. All of them are conditions of an if statement involving either "item1time" or "item2time". These are the parts of the script that tell the program to check whether the child has amassed enough looking at the target word to jump to testing.
  2. A note on stimuli randomization. ORDER1 and ORDER2 familiarize participants on the trochaic passages while ORDER3 and ORDER4 familiarize participants on the iambic passages. ORDER1 and ORDER3 start with 부자 and ORDER2 and ORDER4 start with 고대/고데. Test items are ordered differently for all versions.
Stress_Pattern_Preference
This task is slightly different in that it doesn't technically have a "familiarization phase"; rather, it has about 4 practice items to get the child familiar with whether iambic or trochaic comes out of the left or right speaker. However, I left this portion labelled "familiarization" because it was convenient. Based on the original segmentation script, the changes made were:
  1. For this task trochaic items always come out of one speaker, and iambic items always come out of the other speaker. Therefore, instead of making the speaker direction random, we needed to make it fixed. This was done in both the test and familiarization folders by opening the test_handler script and adding the following code to make the side of the light contingent on the item's id and type:
  if (item.id == 1)
  {
      if (item.type == TROC)
      {
          lightpos = LEFT_SIDE;
      }
      if (item.type == IAMB)
      {
          lightpos = RIGHT_SIDE;
      }
  }
  if (item.id == 2)
  {
      if (item.type == TROC)
      {
          lightpos = RIGHT_SIDE;
      }
      if (item.type == IAMB)
      {
          lightpos = LEFT_SIDE;
      }
  }
  Note on the code above: originally this worked such that I could associate the experiment version (e.g. ORDER1) with having trochaic on one side and a different version with having trochaic on the other side, but then it stopped working and I couldn't figure out why. So instead I set the item id for the items in ORDER1 and ORDER2 to 1 and in ORDER3 and ORDER4 to 2. This way trochaic items are associated with the left for the first 2 versions of the experiment and with the right for the second 2 versions. Item id is normally a way to identify items, but we can just use the name of the wav file for that purpose since we are not repeating any items in this experiment.
  2. We wanted to turn off the red light during the sound presentation for the practice items but not the testing items. This was mentioned in Jusczyk, Cutler & Redanz (1993) as a way to make sure children are associating the speaker direction with the sound and not the light. This was done by going into the familiarization folder, opening the "task_handler" script, and adding this line of code:

  stop_light(FRONT_SIDE | LEFT_SIDE | RIGHT_SIDE);

  under the part of the script that signals the audio to start. This appears in two places--under both "state looking" and "state not_looking".
  3. A note on stimuli randomization. ORDER1 and ORDER2 had the trochaic items play out of the left speaker and ORDER3 and ORDER4 had the iambic items play out of the left speaker. ORDER1 and ORDER3 started with a trochaic item and ORDER2 and ORDER4 started with an iambic item. I used Word List 1 and 2 for both trochaic and iambic practice items for all participants; the order of the test items that followed was pseudorandomized such that ORDER1 and ORDER3 were the same and ORDER2 and ORDER4 were the same. The word lists were made such that the items in Trochaic_List1 and Trochaic_List2 were the same but in a different order.
Music_Preference
We started off with a 350Hz practice tone played once from each speaker to familiarize the participants with how the lights work. Then we move on to practice items, which are not tied to any specific speaker. Test items are completely random, just like in the original segmentation task.

  1. It looks like I actually went to the effort of renaming the folder "practice" rather than "familiarization". Only 2 items are listed as practice items and both are the same 350Hz tone. To ensure that they get one tone on each side, I went into the DEFS script and changed the MAX_SAME_SIDE count to 1 so that the speakers are forced to alternate.
  2. The order of testing items is random for all orders, with the small caveat that the first item is a different stimulus type for each order (ORDER1 starts with KC, ORDER2 starts with WC, ORDER3 starts with KH, ORDER4 starts with WH). For this and all the other experiments, when doing pseudorandomization, I would do it in chunks of 4. So, for example, for this experiment items 1-4 contained all 4 conditions in random order, and items 5-8 contained all conditions in a different random order, all the way until item 16 (this chunked shuffling is sketched after this list). I was careful not to repeat the same condition right after itself for this experiment. For other experiments where there are only 2 conditions, I tried to make sure there weren't more than 2-3 trials in a row from the same condition, but also made sure not to just alternate back and forth between the conditions.
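
Here's a small Python sketch of that chunked pseudorandomization (an illustration of the procedure, not code used in the experiment; the condition labels are the four from this task): each block of 4 contains all conditions in a fresh random order, and a block is reshuffled if it would repeat the condition that ended the previous block.

  import random

  def pseudorandomize(conditions=("KC", "WC", "KH", "WH"), n_blocks=4, seed=None):
      rng = random.Random(seed)
      order = []
      for _ in range(n_blocks):
          block = list(conditions)
          rng.shuffle(block)
          # reshuffle if the block would start with the condition that
          # ended the previous block (no immediate repeats)
          while order and block[0] == order[-1]:
              rng.shuffle(block)
          order.extend(block)
      return order

  print(pseudorandomize(seed=1))  # 16 test items, 4 per chunk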
Music_preference_older
This is the same as the Music_preference task, but instead of a light to attract the infant's attention, a moving checkerboard is used. This was done for the practice and testing phases by copying and pasting a big chunk of code into the left_page and right_page scripts within each folder. The code begins with "ImageShape". Rather than a circle appearing, an image appears. You can compare the scripts between Music_preference and Music_preference_older to see the change; it's near the top of the script. (I did not write this code--Theo Veenker emailed it.)

CDI
This experiment is more similar to the existing IPLP/Moma experiments since it happens entirely on the front screen. Some changes include:
  1. Moving stimuli images. This is virtually the same code that was added to the older Music experiment, except this time it was added to the test_page script. And it was added twice, once for the left image and once for the right image. Search for "ImageShape" within the script to see where this code is.
  2. Breaks after every set of 4 stimuli (the resulting block structure is sketched below). Doing this took 2 steps. One was to divide up the stimuli into sets of 4: I made 10 copies of the test folder and included only 4 stimuli in the stimuli script in each folder. This experiment only has one version, so it wasn't too complicated. The other step was to edit the main Zepman script for the experiment (VisualCDI.zp) in the main folder. I had to import all the test folders and tell the experiment to run each of them separately. Then, between each test run, I added a pause that played a jingle and showed a moving image. The image was taken from Google Images and the background removed in Paint 3D. I made sure it was not an image included in the CDI testing list. The code for the pause was copied and pasted from somewhere, but basically all you have to change is the name of the wav file for the jingle and the png file for the image for each break. The pause is coded directly into the main experiment file and doesn't need an extra folder or anything.
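
For reference, the block structure amounts to something like the following Python sketch (illustrative only; the file names and the count of 40 items are hypothetical placeholders, matching the 10 folders x 4 stimuli described above):

  def chunk(items, size=4):
      """Split a list of stimuli into consecutive sets of `size`."""
      return [items[i:i + size] for i in range(0, len(items), size)]

  test_items = [f"item_{i:02d}.wav" for i in range(1, 41)]  # hypothetical names
  blocks = chunk(test_items)
  for set_no, block in enumerate(blocks, start=1):
      print(f"test set {set_no}: {block}")
      if set_no < len(blocks):
          # between sets: a pause plays a jingle and shows a moving image
          print(f"  break: jingle_{set_no}.wav + break_{set_no}.png")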

Particle_Marker
Copied the stress-preference experiment and modified it. The differences are:

  1. Rather than a familiarization phase, there is a phase called 'practice' which just plays 2 neutral passages (여기봐~ 무슨 소리가 나네... "Look here~ there's a sound...", etc.). One passage plays out of each speaker. The only change for this was putting the file names into the "stimuli" script under the practice folder.
  2. The image on the side screens was also changed to a checkerboard, just like for the music preference task. This was done in the same way--by copying and pasting the chunk of "ImageShape" code into the left_page and right_page scripts in the practice and test folders.
  3. We also wanted breaks every 4 passages for this test. To do this I made 4 copies of the test folder and put a fourth of the stimuli in the stimuli script in each folder. I changed the labeling of the items from iambic and trochaic to grammatical and ungrammatical in the stimuli script. I had to make sure to change two instances of IAMB and TROC to GRAM and UNGRAM in the test_handler script as well, so that the association of one side with one condition happened correctly. The stimuli and test_handler scripts in the practice folder were also updated with the GRAM and UNGRAM labels.
  4. Finally, breaks were added where an image appears and a jingle plays between each testing set. This was done identically to the CDI, with the exception that since this experiment uses 3 screens while the CDI only uses one, I had to add 3 lines of code specifying which screen the fun image was supposed to be on and which screens were blank:
  test_window1.show_page(image_page);
  test_window2.show_page(blank_page);
  test_window3.show_page(blank_page);

  Additionally, the code identifying some specifics about image timing wasn't working, so I used simpler code that's commented out in the CDI version instead (anything with // in front of it isn't actually run--compare the CDI script and this one to see the differences). Search for "image_page.action" to find this code. The code for the pauses was added 4 times into the main ParticleMarker script. The only difference between them is the names of the png and wav files.
There are 4 versions of this task. For ORDER1 and ORDER2 the grammatical items appear on the right and the ungrammatical on the left. For ORDER3 and ORDER4 it is the opposite. The difference between ORDER1 and ORDER2 is the order of the passages; same for ORDER3 and ORDER4.

A note on this experiment (and the CDI): it creates a separate set of output files for each part of the test (so 4 test files plus one for practice for the particle marker experiment--many more for the CDI experiment). We'll just have to read all of them in together when doing data analysis (a sketch of this is below). Just be aware that there are going to be A LOT of output files with very little data in each.
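
When analysis time comes, something like the following Python sketch could pull them all in at once (assuming the outputs are delimited text files with identical columns; pandas, the path, and the file pattern here are assumptions, since I haven't written the analysis script yet):

  from pathlib import Path
  import pandas as pd

  output_dir = Path("ParticleMarker/output")   # hypothetical location
  files = sorted(output_dir.glob("*.csv"))     # adjust pattern to the real output
  parts = [pd.read_csv(f) for f in files]
  data = pd.concat(parts, ignore_index=True)   # one table for the whole task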


Word_Order
Identical script to the particle marker one, but with different stimuli and different images at the breaks. The names of the wav files were changed in each stimuli script. Images were found online and edited in Paint 3D to remove the background.

EXTRA NOTE: Pertaining to all the tasks I've programmed so far, in addition to setting the sample rate of all stimuli items to 48000 Hz, I scaled the intensity of all items to 60 dB (a sketch of what this scaling amounts to is below). We (try to) not touch the dials on the speakers too much in the testing booths, and from my impression when we last checked the sound, items at 70 dB were a bit loud, so I set all the stimuli to 60 dB. We set the speaker to 70 dB using a sound meter in the booth based on a YouTube video which just played a standard 350Hz tone. I realize now that if the volume setting on the YouTube video was anything but full, we might not be able to replicate the same level, but for now, as long as it's not touched, it should be ok.
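
For what it's worth, here is a rough Python sketch of what "scale intensity to 60 dB" amounts to numerically. The actual scaling was done in Praat; treating the samples as sound pressure relative to the 2e-5 Pa reference follows Praat's intensity convention, and this function is only an illustration:

  import numpy as np

  def scale_to_db(samples, target_db=60.0):
      """Rescale a waveform so its RMS corresponds to target_db
      re the 2e-5 Pa reference (i.e. average intensity in dB)."""
      rms = np.sqrt(np.mean(np.square(samples)))
      target_rms = 2e-5 * 10.0 ** (target_db / 20.0)
      return samples * (target_rms / rms)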
EDIT: The sample rate can be changed under the modules folder if there is a script called sound_settings.zm. The code is
  const SampleRate PLAYBACK_SAMPLE_RATE = RATE_48000;
I haven't tried touching this though.

Extra note 2: With the update to Zepman 2.3, there were 2 changes we had to make. One is that at the beginning of the main script for each experiment, we had to add "requires 2.3". The other is that some of the older experiments now require you to put in participant details (sex, birthday) before letting you run the experiment. When it prompts for this, we've just been putting in default data: with the participant you want to run selected, click edit and then check the boxes for all the details it prompts for. We haven't been entering any actual information here. For experiments I created after the update, I removed the prompting for these characteristics; this was under the modules folder, in attributes.zm. I just deleted all the attributes. Some of the older experiments may still prompt; just click the check boxes.