Tuesday, December 18, 2018

How to import LENA Data

When we get a LENA DLP (digital language processor) in the mail, these are the steps for saving the data:
  1. First write the number of the DLP on the LENA recording activity log (녹음 활동 기록표). The number is on the back of the recorder under a barcode. This is just in case we get multiple packages at once and the recorders are switched before we manage to download any data.
  2. Open "Launch LENA" (the elephant icon) from the desktop.
  3. Turn the DLP on by holding down the power button.
  4. Plug the DLP into the cord right behind my keyboard; it should be connected to a USB port on the back of my computer.
  5. HOPEFULLY** the LENA software will pop up with a box asking you to choose the child that the recording belongs to. If this is the first time this child visited, click "Add" (or it might be "create child" or "new"...in any case, do not click "assign" because that assigns the recording to a child that already has been created).
  6. It will prompt you for information about the child. The method I use is First Name = [participant number] (e.g. P90--get from google calendar), Last Name = [experiment run], (e.g. Segmentation or IPLP--also on google calendar), Birth Date = [month][day][year], Gender = [male/female]. Click create/enter or whatever the button says. See below for the naming system now that multiple experiments are running.
  7. Now click the "Assign" button (I think it's actually "assign child" or "assign DLP"). It will ask if you are sure; make sure the information is correct and click OK. Then it will prompt you to choose the time zone. It is usually set to Seoul automatically, so just click OK. Then the download should begin.
  8. The download only takes about 30 seconds, then it begins to process the file. You can remove the DLP once the file is downloaded, even while it is still processing; the screen will tell you whether you can remove the DLP. Go to "My Computer" and eject the DLP (it is usually called something like LENA [DLP number], e.g. LENA 014422). Charge the DLP in the wall charger. DO NOT CLOSE THE LENA SOFTWARE OR TURN OFF THE COMPUTER. The software must stay on to finish processing a file, which takes about 1.5 hours for a 16-hour recording.
  9. Finally, update the LENA participants spreadsheet on the Google Drive with all the information written on the LENA recording activity log (녹음 활동 기록표). Fill in the address from the package received, and get the participant number from the Google calendar.
  10. Put the log in the yellow folder above my computer labeled "LENA 녹음 활동 기록표" (recording activity log) on the side.
  11. ALL DONE!
**Troubleshooting: Occasionally the LENA software does not recognize a recorder when it is plugged in. So far I've tried a combination of restarting the computer, turning the DLP on and off, and plugging the USB into a different port. Changing the USB port seems to be the most effective, so try other ports on both the front and back of the computer.

Naming System within LENA
The chart below shows which items uniquely identify participants from the multiple experiments we are running. Since some children participate in multiple experiments, it's important that we can group them correctly once it's time to analyze the data.

The chart columns are Last Name, First Name, and Grouping; the experiments covered are IPLP, Music (Mxx), Word Teaching, Segmentation 1, CDI, Moma, and Segmentation 2.
For example, if a child participated in IPLP and Music, his/her last name would be IPLP-Music, the first name would be Mxx-Ixx, and there would be no grouping.
For a child that participates in IPLP, Moma, and Music, Last name would be IPLP-Music-Moma, first name would be Mxx-Pxx-Ixx, and Grouping would be Moma.
Participants who did segmentation 1 and 2 have a last name of Seg1-2, first name of Sxx(Sxx) where the first S is their number in seg 1 and the number in parenthesis is their number in seg 2. Participants who started with Seg 2 have only their participant number from Seg 2 listed.
We are running into cases where children have done more than 3 studies. I've been prioritizing Seg 1, Seg 2, and CDI for the labeling, and if they did anything else I just add it at the end of their first and last name.
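As a quick illustration, the joining logic behind this naming system can be sketched in Python. This is a hypothetical helper (`lena_name` and the example codes I07/M12 are made up), and note that in the actual convention the first-name codes were sometimes listed in a different order than the last-name labels.

```python
# Hypothetical sketch of the LENA naming convention described above.
# lena_name and the example codes (I07, M12) are made up for illustration.

def lena_name(experiments):
    """experiments: list of (experiment_label, participant_code) pairs.
    Returns (last_name, first_name) as entered into the LENA software."""
    last_name = "-".join(label for label, _ in experiments)
    first_name = "-".join(code for _, code in experiments)
    return last_name, first_name
```

Special cases like Seg1-2 participants, whose first name is Sxx(Sxx), would still be built by hand rather than joined with hyphens.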

Unfortunately the LENA software does not allow grouping of kids into more than one group, or this would be much simpler.

Wednesday, November 21, 2018

Experiment schedule by age


music intake questionnaire (Margarethe?)
9 months: English-learning 9-month-olds prefer trochaic rhythm, but 6-month-olds do not (Jusczyk, Cutler & Redanz, 1993)

Will Chonnam infants prefer trochaic or iambic?

  • segmentation 1 7.5 months (ideally 7:0~8:0)
  • word teaching    9~12 months (first a reading, then a spontaneous session; counterbalance the order of ADS/IDS sessions)
  • segmentation 2 10.5 months (ideally 10:0~11:0)
  • visual CDI 13.5 months (ideally 13:0~14:0)
  • syntax 17 months?

music perception: all ages (if the available baby does not fall within the age range above).

If the child's age closely misses the target age of the language studies, include them in the language study rather than music.

Age (mo) | Study                       | Method             | LENA | SES online
7        | segmentation 1              | HPP (seg1)         | yes  | when 18 months
8        | word teaching               | recording          | yes  |
9        | word teaching               | recording          | yes  |
10       | segmentation 2              | HPP (seg2)         | yes  |
11       | word teaching               | recording          | yes  |
12       | word teaching               | recording          | yes  |
13       | sound symbolism, visual CDI | IPLP (buba, moma)  | yes  |
14       | visual CDI                  | IPLP (buba, moma)  | yes  |
15       | music perception            | HPP (buba, moma)   | yes  |
16       | music perception            | HPP (music, moma)  | yes  |
17       | syntax                      | HPP (syntax, moma) | yes  |
18       | music perception            | HPP (music, moma)  | yes  |
19       | music perception            | HPP (music, moma)  | yes  |
20       | visual CDI                  | IPLP (moma)        | yes  |
21       | music perception            | HPP (music, moma)  | yes  |
22       | music perception            | HPP (music, moma)  | yes  |
23       | music perception            | HPP (music, moma)  | yes  |
24       | music perception            | HPP (music, moma)  | yes  |
25       | visual CDI                  | IPLP (moma)        | yes  |
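The assignment rule (language study when the age fits, music perception otherwise) can be sketched roughly as follows. The mapping is read off the schedule above; `assign_study` is our own name, not anything in our software.

```python
# Sketch of the age-to-study assignment from the schedule above.
# The dict keys are ages in months; music perception is the fallback
# for any age not covered by a language study.
SCHEDULE = {
    7: "segmentation 1 (HPP)",
    8: "word teaching (recording)",
    9: "word teaching (recording)",
    10: "segmentation 2 (HPP)",
    11: "word teaching (recording)",
    12: "word teaching (recording)",
    13: "sound symbolism, visual CDI (IPLP)",
    14: "visual CDI (IPLP)",
    17: "syntax (HPP)",
    20: "visual CDI (IPLP)",
    25: "visual CDI (IPLP)",
}

def assign_study(age_months):
    # Language studies take priority; music perception covers other ages.
    return SCHEDULE.get(age_months, "music perception (HPP)")
```

A child who closely misses a language-study window would still be placed in the language study by hand, per the note above.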

Sunday, October 14, 2018

10/15 summary

Install the LENA software this week
After the install, Facebook ads (Jihyo) - target of 70 participants total
LENA vests: 4 pairs (8 vests) per age group, 6-9 months and 9-12 months (Jihyo)

phoneme perception paper research - lenis/fortis
monolingual & bilingual phoneme discrimination research - Margarethe

music perception test plan - for infants around 12 months (babies who don't fall in the target age ranges)
Korean music, Western music, haegeum, cello
20 music pieces
5 Korean songs, 5 German songs - 4 secs each

Organize the SELSI data (Jihyo)

"너무" intensifier project
-perception side test
-how universal
-emotion & prosody

Tuesday, October 9, 2018

10/8 summary

<Paper> Hyunji Kim
Early Word Segmentation in Naturalistic Environments: Limited Effects of Speech Register

<LENA>
-Purchase from the website
-Attach a flap using the pocket on clothes we already have
-Make vests out of thick fabric following the LENA vest design *Jihyo Kim
-Research courier options *Hyunji Kim
-Check whether a small box can be sent by courier *Jihyo Kim

<IPLP>
-Code the remaining files *Jihyo Kim
-Data analysis *Margarethe

Sunday, October 7, 2018

LENA recording instruction

<Points to think about further>
  • A log for recording activities by time of day on the recording day
  • 8 recorders, 7 outfits (0-3 months (1), 3-6 months (1), 6-9 months (2), 12-18 months (3))
    • Vest purchase: https://shop.lena.org/collections/clothing
  • Research courier envelopes, courier vs. post office, and prepaid payment options

<References>


Monday, September 17, 2018

9/17 summary

Hyunji
  • Re-examine the proportion of long vowels in the Jeonnam dialect (dictionary, corpus)
  • Based on the transcription files, extract words with a high vowel in the first syllable (with Margarethe's help), code whether each of those words has a long word-initial vowel, then open the corresponding audio files in Praat and measure the rate at which the high vowels are devoiced
Margarethe
  • Check the sampling rate of the HPP stimuli with Jihyo
  • Literature review on whether infants' monosyllable segmentation should start at 6 months or 7.5 months

Tuesday, September 11, 2018

Zepman coding (Margarethe)

Segmentation_Exp1
I've spent the last 2 days getting to know the Zepman coding language and experiment set-up. There are a lot of little problems I ran into, and here's how I managed to fix them. Most of my coding is not the most elegant, but it's working as of now! Things that still need to be addressed are highlighted.


  1. You can't use Hangul in any of the code. Our stimuli files had Korean names, and the experiment only ran after I changed them to romanized characters.
  2. The sampling rate of the output device is set in a package that is not easily accessible (under Program Files -> zep -> 2.0 -> modules). It is one of the files with playback or std_ in the name. In our case the sampling rates of our stimuli were not all the same, so some became very high-pitched and some very low when running them in the experiment, even though they sounded fine in Praat. I fixed the problem by resampling all the stimuli in Praat to the rate in the Zepman file (48000 Hz). Because this is a global value for the Zepman program, I don't think it's possible to change it per experiment (perhaps if you write an extra script, but I'm not that fancy yet). It seems easier to just make sure all the stimuli you use are sampled at 48000 Hz.
  3. All the scripts I used were based on the code provided on the Zepman website for an infant headturn procedure. Within an experiment folder, you will find the broad script for the task (in our case called segmentation), a stimuli folder, and a folder for each phase of your experiment. There are a few other, less important folders in there too. Within the folder for each phase, you will find broadly these scripts: "task", "test_handler", "stimuli", "def", "output".
  4. The task I was setting up has a familiarization and a test phase. The mechanics of each phase are broadly the same: a sound and a red dot appear on the left or right, and the time that the child attends to it must be tracked. If the child looks away for more than 2 seconds, the trial ends. The difference between the phases is that during familiarization we must track online the amount of time that the child is actively looking at the item, because once they have amassed 30 seconds of looking to a particular familiarization item, that item does not need to be played again. So the test_handler script is identical in both phases, but all other scripts had to be tweaked slightly.
  5. STIMULI SCRIPT: This is straightforward; just rename the sound files to the stimuli you want to use. They must match the names of the files in your stimuli folder. The index number of the items here is important, at least for the familiarization script. I kept the index number consistent with the item even though it was repeated many times. Having a consistent index number makes it possible to keep a running count of time spent on the item in the task script. For now there are 4 versions of the task, and the order of testing items is not randomized by participant (i.e. all children who get order 1 will get the testing items in the same order, although the items themselves are pseudorandomized within the testing phase).
  6. DEF SCRIPT: Here you can change the number of trials you want to run or how long the child can look away before the trial ends. Currently both testing and familiarization have 12 trials, but we need to discuss whether we want to create more trials for familiarization in case infants don't amass enough orientation time within those trials. It can be a huge number, since the program automatically jumps to testing once enough time is amassed. UPDATE: We changed it to about 30 trials for familiarization just in case we have a fussy baby; however, in general they need about 6 trials to amass enough time to jump to testing (~3 on each item).
  7. TASK SCRIPT: Most of the changes to the TASK script in the familiarization folder were to make sure that once 30 seconds of active looking time was amassed for an item, it was no longer played, and similarly that if both familiarization items amassed 30 s of looking time, the experiment jumped to the testing phase. If a child amasses 30 s of looking time but the trial isn't over yet and the child is still looking, then the trial continues in its entirety or until the child looks away.
  8. Another note on the stimuli. The task is randomized such that for ORDER1 and ORDER2 the words familiarized are 국 and 밥, but for ORDER3 and ORDER4 the words familiarized are 컵 and 책. The difference between ORDER1 and ORDER2 is that in ORDER1 the first passage presented is 국 but in ORDER2 the first word presented is 밥, and then it alternates until they amass 30 sec. Similarly, ORDER3 starts with 책 but ORDER4 starts with 컵. The order of test items is different for all versions.
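The familiarization bookkeeping described in the points above can be sketched in Python (this is not Zep code, just an illustration of the logic; the function names are made up):

```python
# Python sketch (not Zep) of the familiarization logic: amass looking time
# per item, stop playing an item once it reaches the target, and jump to
# the test phase when both items are done. Names are illustrative only.

FAMILIARIZATION_TARGET_MS = 30_000  # raised to 45_000 for the disyllable task

def update_familiarization(times, item, look_ms):
    """times: dict mapping item -> amassed looking time in ms.
    Returns 'test' once both familiarization items reach the target."""
    times[item] = times.get(item, 0) + look_ms
    done = len(times) == 2 and all(
        t >= FAMILIARIZATION_TARGET_MS for t in times.values())
    return "test" if done else "continue"

def playable_items(times, items):
    # Items that already amassed the target time are no longer played.
    return [i for i in items if times.get(i, 0) < FAMILIARIZATION_TARGET_MS]
```

The consistent item index mentioned in the STIMULI SCRIPT note is what makes a running-count dict like `times` possible.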
UPDATE: 3 other experiments are now also programmed in Zepman, mostly based off of the segmentation_exp1 script. The changes/task designs are described below:


Disyllable_segmentation_exp2
This task is literally a copy-and-pasted version of the task described above, except the stimuli are 2 syllables long. The passages and word lists are slightly longer (25 sec in the 2nd exp vs. about 20 sec in the 1st exp), although the number of words in each word list is the same (20 repetitions). Therefore we also wanted to give the infants a little more exposure time during the familiarization phase, so we raised the time they need to amass to move on to testing. How to do this is described below:
  1. In the familiarization folder, open the Task script. Everywhere it says 30000, change it to 45000. There should be 4 places to make the change. All of them are conditions of an if statement involving either "item1time" or "item2time". These are the parts of the script that tell the program to check whether the child has amassed enough looks to the target word to jump to testing.
  2. A note on stimuli randomization. ORDER1 and ORDER2 familiarize participants on the trochaic passages while ORDER3 and ORDER4 familiarize participants on the iambic passages. ORDER1 and ORDER3 start with 부자 and ORDER2 and ORDER4 start with 고대/고데. Test items are ordered differently for all versions.
Stress_Pattern_Preference
This task is slightly different in that it doesn't technically have a "familiarization phase"; rather, it has about 4 practice items to get the child familiar with whether iambic or trochaic comes out of the left or right speaker. However, I left this portion labelled "familiarization" because it was convenient. Based off of the original segmentation script, the changes made were:
  1. For this task trochaic items always come out of one speaker, and iambic items always come out of the other speaker. Therefore instead of making speaker direction random, we needed to make it fixed. This was done in both the test and familiarization folders by opening the test_handler script and adding the following code to make the images contingent on the status of the item:
if (item.id == 1)
{
    if (item.type == TROC)
    {
        lightpos = LEFT_SIDE;
    }
    if (item.type == IAMB)
    {
        lightpos = RIGHT_SIDE;
    }
}
if (item.id == 2)
{
    if (item.type == TROC)
    {
        lightpos = RIGHT_SIDE;
    }
    if (item.type == IAMB)
    {
        lightpos = LEFT_SIDE;
    }
}
  Note: Originally this was working such that I could associate the experiment version (e.g. ORDER1) with having trochaic on one side and a different version with having trochaic on the other side. But then it stopped working and I couldn't figure out why. So instead I set the item id for the items in ORDER1 and ORDER2 to 1, and in ORDER3 and ORDER4 to 2. This way trochaic items are associated with the left for the first 2 versions of the experiment and with the right for the second two versions. Originally the item ID is a way to identify items, but we can just use the name of the wav file for the same purpose, since we are not repeating any items in this experiment.
  2. We wanted to turn off the red light during the sound presentation for the practice items but not the testing items. This was mentioned in Jusczyk, Cutler & Redanz (1993) as a way to make sure children are associating the speaker direction with the sound and not the light. This was done by going into the familiarization folder, opening the "task_handler" script, and adding this line of code:

stop_light(FRONT_SIDE | LEFT_SIDE | RIGHT_SIDE)

under the part of the script that signals the audio to start. This appears in two places: under both "state looking" and "state not_looking".
  3. A note on stimuli randomization. ORDER1 and ORDER2 had the trochaic items play out of the left speaker, and ORDER3 and ORDER4 had the iambic items play out of the left speaker. ORDER1 and ORDER3 started with a trochaic item, and ORDER2 and ORDER4 started with an iambic item. I used Word Lists 1 and 2 for both trochaic and iambic practice items for all participants; the order of test items that followed was pseudorandomized such that ORDER1 and ORDER3 were the same and ORDER2 and ORDER4 were the same. Word lists were made such that the items in Trochaic_List1 and Trochaic_List2 were the same but in a different order.
Music_Preference
We started off with a 350 Hz practice tone played once in each speaker to familiarize the participants with how the lights work. Then we moved on to practice items, which are not tied to any specific speaker. Test items are completely random, just like the original segmentation task.

  1. Looks like I actually went to the effort of renaming the folder practice rather than familiarization. Only 2 items are listed for the practice items, and both are the same 350 Hz tone. To ensure that they get one tone on each side, I went into the DEFS script and changed the MAX_SAME_SIDE count to 1 so that the speakers are forced to alternate.
  2. The order of testing items is random for all orders, with the small caveat that the first item is a different stimulus type for each order (ORDER1 starts with KC, ORDER2 starts with WC, ORDER3 starts with KH, ORDER4 starts with WH). For this and all the other experiments, when doing pseudorandomization, I would do it in chunks of 4. So, for example, for this experiment items 1-4 contained all 4 conditions in random order, items 5-8 contained all conditions in a different random order, and so on up to item 16. I was careful not to repeat the same condition right after itself for this experiment. For other experiments where there are only 2 conditions, I tried to make sure there weren't more than 2-3 trials in a row from the same condition, but also made sure not to just alternate back and forth between the conditions.
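The chunk-of-4 pseudorandomization described here can be sketched in Python. This is illustrative only (the actual orders were fixed by hand, not generated at runtime):

```python
# Sketch of chunk-of-4 pseudorandomization: each block of 4 trials holds
# all 4 conditions in a fresh random order, and a block is redrawn if its
# first condition would repeat the last condition of the previous block.

import random

def pseudorandom_order(conditions, n_blocks, rng=random):
    order = []
    while len(order) < n_blocks * len(conditions):
        block = list(conditions)
        rng.shuffle(block)
        if order and block[0] == order[-1]:
            continue  # would repeat the same condition back to back
        order.extend(block)
    return order
```

With `pseudorandom_order(["KC", "WC", "KH", "WH"], n_blocks=4)`, every block of 4 contains all conditions and no condition repeats immediately.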
Music_preference_older
This is the same as the Music_preference task, but instead of a light to attract infants' attention, a moving checkerboard is used. This was done for the practice and testing phases by copying and pasting a big chunk of code into the left_page and right_page scripts within each folder. The code begins with "ImageShape". Rather than a circle appearing, an image appears. You can compare the scripts between music preference and music preference older to see the change. It's near the top of the script. (I did not write the script; Theo Veenker emailed it.)

CDI
This experiment is more similar to the existing IPLP/Moma experiments, since it happens entirely on the front screen. Some changes include:
1. Moving stimuli images. This is virtually the same code that was added to the older Music experiment, except this time it was added to the test_page script, and it was added in twice: once for the left image and once for the right image. Search for "ImageShape" within the script to see where this code is.
2. Breaks after every set of 4 stimuli. Doing this took 2 steps. One was to divide up the stimuli into sets of 4: I made 10 copies of the test folder and only included 4 stimuli in the stimuli script in each folder. This experiment only has one version, so it wasn't too complicated. The other step was to edit the main Zepman script for the experiment (VisualCDI.zp) in the main folder. I had to import all the test folders and tell the experiment to run each of them separately. Then, in between each test run, I added a pause that played a jingle and a moving image. The image was taken off Google Images and the background removed in Paint 3D. I made sure it was not an image included in the CDI testing list. The code for the pause was copied and pasted from elsewhere; basically all you have to change is the name of the wav file for the jingle and the png file for the image for each break. The pause is coded directly into the main experiment file and doesn't need an extra folder or anything.

Particle_Marker
Copied the stress-preference experiment and modified it. The differences are:

1. Rather than a familiarization phase, there is a phase called 'practice' which just plays 2 neutral passages (여기봐~무슨 소리가 나네...etc.). One passage plays out of each speaker. The only change for this was putting the file names into the "stimuli" script under the practice folder.
2. The image on the side screens was also changed to a checkerboard, just like for the music preference task. This was done in the same way: by copying and pasting the chunk of "ImageShape" code into the left_page and right_page scripts in the practice and test folders.
3. We also wanted breaks every 4 passages for this test. To do this I made 4 copies of the test folder and put a fourth of the stimuli in the stimuli script in each folder. I changed the labeling of the items from iambic and trochaic to grammatical and ungrammatical in the stimuli script. I had to make sure to change two instances of IAMB and TROC to GRAM and UNGRAM in the test_handler script as well, so that the association of one side with one condition happened correctly. Both the stimuli and test_handler scripts in the practice folder were also updated with the GRAM and UNGRAM labels.
4. Finally, breaks were added in where an image appears and a jingle plays between each testing set. This was done identically to the CDI, with the exception that since this experiment uses 3 screens while the CDI only uses one, I had to add 3 lines of code specifying which screen the fun image was supposed to be on and which screens were blank:
  test_window1.show_page(image_page);
  test_window2.show_page(blank_page);
  test_window3.show_page(blank_page);

Additionally, the code identifying some specifics about image timing wasn't working, so I used a simpler bit of code that's commented out in the CDI version instead (anything with // in front of it isn't actually run; compare the CDI script and this one to see the differences). Search for "image_page.action" to find this code. The code for the pauses was added in 4 times to the main ParticleMarker script. The only difference between them is the name of the png and wav files.
There are 4 versions of this task. For orders 1 and 2 the grammatical images appear on the right and the ungrammatical on the left. For orders 3 and 4 it is the opposite. The difference between orders 1 and 2 is the order of the passages; same for orders 3 and 4.

A note on this experiment (and the CDI): it creates a separate set of output files for each part of the test (so 4 test parts + one for practice for the particle marker experiment; many more for the CDI experiment). We'll just have to read all of them in together when doing data analysis. Just be aware that there are going to be A LOT of output files with very little data in each.
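Reading all the small output files in together at analysis time might look roughly like this in Python, assuming they are CSV-like with a shared header (the actual Zep output format may differ; adjust the reader accordingly):

```python
# Sketch of pooling the many per-part output files for analysis, assuming
# CSV-like files with a shared header. read_all_outputs is our own name.

import csv
import glob

def read_all_outputs(pattern):
    """Read every file matching pattern and pool the rows, tagging each
    row with the file it came from so test parts stay distinguishable."""
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path
                rows.append(row)
    return rows
```

Tagging each row with its source file keeps the practice part separable from the test parts after pooling.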


Word_Order
Identical script to the particle marker, but with different stimuli and different images at the breaks. Names of wav files were changed in each stimuli script. Images were found online and edited in Paint 3D to remove the background.

EXTRA NOTE: Pertaining to all tasks I've programmed so far, in addition to setting the sample rate of all stimuli items to 48000 Hz, I scaled the intensity of all items to 60 dB. We (try) not to touch the dials on the speakers too much in the testing booths, and from my impression when we last checked the sound, items at 70 dB were a bit loud, so I set all the stimuli to 60 dB. We set the speaker to 70 dB using a sound meter in the booth based on a YouTube video which just played a standard 350 Hz tone. I realize now that if the volume setting on the YouTube video was anything but full, we might not be able to replicate the same level, but for now, as long as it's not touched, it should be ok.
EDIT: The sample rate can be changed under the modules folder if there is a script called sound_settings.zm. The code is
const SampleRate PLAYBACK_SAMPLE_RATE = RATE_48000;
I haven't tried touching this though.

Extra note 2: With the update to Zepman 2.3, there were 2 changes we had to make. One is that at the beginning of the main script for each experiment we had to add "requires 2.3". The other is that some of the older experiments now require you to put in participant details (sex, birthday) before they let you run the experiment. When prompted for this, we've just been putting in the default data: with the participant you want to run selected, click edit and then check the boxes for all the details it prompts for. We haven't been entering any actual information here. For experiments I created after the update, I removed the prompting for these characteristics; this was in attributes.zm under the modules folder. I just deleted all attributes. Some of the older experiments may still prompt; just click the check boxes.










Tuesday, July 24, 2018

intensifier - positive & negative lists


  • (+) POSITIVE
  • VERY 5000
    • 001, 054, 058, 060, 061 much
    • 002 high
    • 005 promising
    • 006 meaningful
    • 008 special
    • 011, 023 good
    • 012 influential
    • 013, 043 close
    • 017, 019, 021, 027, 045, 057 nice
    • 018 kind
    • 033 happy
    • 037 cultural
    • 038 pop
    • 040 intensive
    • 041 positive
    • 044 sweet
    • 048 economically
    • 055, 056 interesting
    • 059 best
    • 061 similar
    • 065 reasonable
    • 066 pleasant
    • 069 cute
    • 071 clear



  • (-) NEGATIVE
  • VERY 5000
    • 004 dangerous
    • 007 weird
    • 009, 029, 034, 039, 042, 062 hard
    • 010 anti
    • 020 crowded
    • 024 different
    • 026 sore
    • 030 cold
    • 031 upsetting
    • 032 upset
    • 047 distraught
    • 052 heavy
    • 053 late
    • 064 poorly
    • 067 bad



  • (?) UNDEFINABLE
  • VERY 5000
    • 003 closely
    • 014, 015, 016 muggy
    • 025 careful
    • 028 slowly
    • 035 dorm
    • 036 traditional
    • 046 briefly
    • 049 tentative
    • 063 unusual
    • 070 transitional

Tuesday, July 17, 2018

intensifier


My goal is to find out whether these results appear not only with 너무 but also with adverbs like "진짜" and "정말", and further whether they hold for English "really" and "very" as well: in other words, whether this is a language-general phenomenon or one limited to Korean (or even just to 너무).

Stimuli
(1) Korean: 너무, 진짜, 되게
(2) English: really, very, so
For each word: 10 positive-word files, 10 negative-word files = 20 files

Participants
(1) 30 Koreans
(2) 30 foreigners *English natives only??? *foreigners whose native language is neither Korean nor English (Chinese, Japanese, etc.)???

(1) Corpus study: check the pitch and duration differences in Korean and English depending on whether the word after the intensifier is positive or negative
(2) Experiment: give Koreans/foreigners the two kinds of stimuli and have them guess positive/negative
*Foreigners: length of residence in Korea, Korean language certificates, length of Korean study, self-rated Korean fluency
*Koreans: TOEIC score (below 400, 400-700, 700-850, above 850, none), length of English study, study abroad, self-rated English fluency
(3) Analysis
Koreans with Korean stimuli, Americans with English stimuli /// Koreans with English stimuli, Americans with Korean stimuli

Monday, July 2, 2018

LENA study plan

07-02 ICIS
S5.5 The language-learning environments of Latino infants from Spanish-speaking homes from birth to 36 months

                                              (1) Experiences over time
                                              -from birth to 2, N=138, 1mo, 6mo, 14mo, 24mo

                                              a. What was child's activity?
                                              b. Who was activity with?
                                              c. Who else was there?
                                              6 am - 2 pm

                                              -People over time
                                              -Activity over time: sleep, out of home activities, TV/media, caregiver, literacy, play
-language input: literacy ↓; play, feeding ↑

                                              (2) Language across routines (language input, quantity)
                                              -video-recorded at home for 1-2 hours
                                              -coded each minute
                                              -fathers' language input ↑

                                              -code-switching in bilingual environment
                                              ㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡ
                                              SES + bilingual(Dual Language Learner)
                                              30 million gap - quantity and quality

                                              shared bookreading

                                              6mo, 14mo, 24mo, stimQ
reading quantity & quality

                                              outcomes - 54mo

                                              parent reporting: have you ever read a book with your child?
                                              how many days/week did mothers report reading with child?
                                              when are parents reading to children?
Book reading at 6 months significantly predicted 54-month expressive language.
                                              ㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡ

                                              SES and mother-child interaction

                                              https://books.google.co.kr/books?id=ZLyQAgAAQBAJ&dq=education+standin+ses

Tuesday, June 26, 2018

Second half of 2018 (김지효)


• segmentation
  • 1st: monosyllables
    • record cleanly
    • set up zepman
  • 2nd: bisyllables
• word teaching strategies
  • define tag-repetition
  • rework the coding scheme
  • position coding (Excel), repetition coding (Excel), prosody coding (Praat)
  • record together with segmentation sessions
    • rework the word teaching task
• IPLP analysis and paper
  • finish the remaining coding
  • write the procedure section
• ManyBabies2: K M-B CDI

• (Saessak grant) "너무" analysis and paper
• (by August 17) National university-student English corpus competition www.shobs.kw.ac.kr
  • Gutenberg
  • analyze more works than last time
  • topic modeling

Sunday, June 24, 2018

                                              segmentation experiment design


                                              • segmentation
                                                • (1) monosyllable(Nishibayashi, 2014)
                                                  • Familiarization: passage - Test: target / distractor
                                                  • Participants: 20 6-month-olds (Nishibayashi, 2014), 24 7.5-month-olds (Jusczyk, 1995)
                                                  • Stimuli: 책, 컵, 국, 밥
                                                    • 4 monosyllabic words (Jusczyk, 1995) (Nishibayashi, 2014)
                                                    • content words containing stressed syllables, having a well defined onset and offset, contrasting in their vowel qualities, and being used in simple sentences that one might speak to a child. (Jusczyk, 1995)
                                                    • 4 different 6-sentence passages (18.51s - 20.6s) / 15 times in a row of the isolated words (25.84s - 27.13s) (Jusczyk, 1995)
                                                    • 4 different 8-sentence passages, 10 mean number of syllables per sentence / a list of 20 isolated occurrences (20s) (Nishibayashi, 2014)
                                                • (Nishibayashi, 2014) Young French-learning infants can segment monosyllabic words (at 6 and 8 months), syllables embedded in bisyllabic words (at 6 months) but cannot segment bisyllabic words (at 6 months). "passage - word"
                                                • (Jusczyk, 1995) 7.5-month-old American infants can segment monosyllabic words. "word-passage"

책 'book', 밥 'rice' (targets) with 컵 'cup', 국 'soup' (distractors) / 컵, 국 (targets) with 책, 밥 (distractors)
(1) Familiarization - 2 passages
- one passage = 6 sentences; the target cycles through initial-medial-final position twice; ~20 s per passage
- familiarization criterion: 30 s of accumulated looking per word's passage
 => familiarization phase duration: 2 passages at 30 s each = 60 s; infants must look for at least 1 minute
- on each trial, the speaker side and presentation order are randomized

(2) Test - 2 targets / 2 distractors
- one list per word; the word is read 15 times; each list is ~20 s
- every test trial includes each word's list (4 lists in total)
- begins as soon as the 2-minute familiarization ends
- 3 trials, 4 lists each (presentation order and speaker side randomized)
 => test phase duration: 3 trials x 4 lists (20 s) = 3 trials x 80 s = 240 s, at most 4 minutes
 (e.g.) trial 1: 책 list -> 국 list -> 밥 list -> 컵 list
 (e.g.) trial 2: 국 list -> 밥 list -> 컵 list -> 책 list
 (e.g.) trial 3: 밥 list -> 컵 list -> 책 list -> 국 list
- if the infant looks away for 2 s or more, the trial ends and the next one begins

Expected experiment duration: about 5 minutes
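The timing arithmetic above can be sanity-checked in a few lines (a minimal sketch; the constants are just the design numbers from these notes):

```python
# Sanity-check the session timing for the segmentation design above.
# All constants come from the design notes; nothing here is measured data.

N_PASSAGES = 2        # familiarization: one passage per target word
FAM_CRITERION_S = 30  # accumulated looking time required per passage
N_TRIALS = 3          # test phase trials
N_LISTS = 4           # one isolated-word list per stimulus word
LIST_DURATION_S = 20  # each list plays for ~20 s

familiarization_s = N_PASSAGES * FAM_CRITERION_S       # minimum looking time
test_s = N_TRIALS * N_LISTS * LIST_DURATION_S          # maximum test duration
total_min = (familiarization_s + test_s) / 60

print(familiarization_s, test_s, total_min)  # 60 240 5.0
```

So the 5-minute estimate is the 1-minute familiarization floor plus the 4-minute test ceiling.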

• Word Teaching Task (Aslin, 1993) (Fernald & Mazzie, 1991)
  • (Aslin, 1993)
    • 12-month-olds; teaching lips, wrist, lobe
      • novel but actual words
      • onset phoneme that is typically continuously voiced
      • referred to a body part that could be used as a pointing referent
  • (Fernald & Mazzie, 1991)
    • (1) dressing storybook; tell the story in a comfortably furnished laboratory playroom - 14-month-olds, IDS and ADS
    • (2) German household appliances; teach how to assemble the machines - ADS
  • neither paper reports how many minutes the teaching lasted
Attach names to images of novel objects.
Have the mother teach the child in the lab booth while viewing a PPT screen (IDS).
Have her teach the same object names to another researcher (ADS).
- Recording in the lab booth gives clean audio, but the booth is dark; will the baby be comfortable? Teach from the PPT screen.
- Recording in the office allows more mother-child interaction but the audio is less clean; teach from a laptop screen, or attach names to stuffed animals in the office.

Saturday, April 28, 2018

To-do for May 2018

(1) Assist with the new recordings of mothers' speech and the acoustic analysis

(2) Conference event support; editing work

(3) Finish the paper for the Hanyang University presentation

(4) Design the segmentation study

Tuesday, April 10, 2018

ManyBabies 1 CDI follow-up instructions

From: manybabies1 [manybabies1-bounces@lists.stanford.edu] on behalf of Melanie Soderstrom [M_Soderstrom@umanitoba.ca]
Sent: Tuesday, January 30, 2018, 11:22 AM
To: manybabies1@lists.stanford.edu
Subject: [manybabies1] CDI follow-up study
                                              Dear colleagues,
                                              The ManyBabies 1 CDI follow-up has officially been submitted for peer-review as a registered report with JCL during their pilot phase for this format. You can view the submitted manuscript here.

                                              It is not too late to sign up! We welcome additional laboratories joining the follow-up study. You can see the Instructions here, and sign up here. The minimum commitment is only 10 infants.

                                              Also, note that we would welcome contributions from researchers working with non-North American English learning infants. Kiley Hamlin has kindly offered to help if any laboratories need help setting up a non-American-English CDI survey in online format to ease data collection. Contact hamlinlab@psych.ubc.ca if this is of interest to you.

                                              Please let me know if you have any questions!


                                              Melanie Soderstrom
                                              Associate Head (Graduate)
                                              Associate Professor
                                              Department of Psychology
                                              University of Manitoba

SES survey items to draft

                                              http://lefft.xyz/pdf/Hernandez-etal_SRCD_2017_Poster_Final.pdf

                                              https://www.researchgate.net/publication/320835759_Development_of_the_Survey_of_ParentProvider_Expectations_and_Knowledge_SPEAK

Wednesday, April 4, 2018

To-do for April 2018

• Post the Hanyang University poster presentation on the website
• Make promotional items for the Baby Fair on the 19th
  • pens, tissues, wet wipes, printed with the website address and a QR code
  • look into gifticons (mobile gift vouchers)
  • post study descriptions and questionnaires on the website; set the domain to www.chosunbaby.com
  • for the CDI questionnaire, wait for UBC until the 13th
  • drafting the SES questionnaire items is urgent

• Finish HPP participant recruitment
  • as of April 23, need 5 younger and 2 older infants
• Fit the HPP template
  • participant
  • data

• IPLP coding
• Email about why IPLP habituation splits into two groups
• Write the IPLP procedure paper
• Record IPLP SELSI scores

• Buy furniture (clock, ball pit, armchair)
• Submit the research plan amendment and the LENA purchase request; submit the results report by the 27th (고언숙)
• English touch coding (현지)
• Baby Fair, Thursday April 19 - Sunday April 22 (지효, 현지) + part-time helper(?)
  • ages 0-24 months
• Prepare the HisPhonCog 2018 poster presentation at Hanyang University (현지 attending)

• Plan the segmentation experiment (지효, 현지)
  • Segmentation literature

    • a paper on the HPP methodology itself
    • links to Jusczyk & Aslin (1995), one of the classic segmentation papers, and another Jusczyk paper:
    https://www.dropbox.com/s/5v993mqxkvy88xj/Jusczyk_Aslin_1995.pdf?dl=0
    https://www.dropbox.com/s/t9h39et3t9v9v1s/Jusczyk99.pdf?dl=0
    • a page collecting the kinds of studies Peter Jusczyk, one of the pioneers of infant language acquisition research, ran at each age:
    http://hincapie.psych.purdue.edu/Jusczyk/Age.html
    • the research pages of Amanda Seidl, a student of Peter Jusczyk who collaborates with our lab on gesture research; she worked on segmentation into the mid-to-late 2000s, and several papers can be downloaded:
    http://web.ics.purdue.edu/~aseidl/AmandaSeidl.html#Current_Research
    http://web.ics.purdue.edu/~aseidl/Purdue_Infant_Lab/Publications.html
    • possible segmentation experiment topics for Korean-learning infants:
    (1) Unlike English, Korean neutralizes coda consonants. Test whether infants can recover the underlying phoneme from the neutralized coda and recognize the word.
    Example: hearing the sentence "잎이 [ipi] 아주 예쁜...", can they recognize the word "잎" [ip]?
    Concrete experiment plan:
    1. First, test whether 7.5-month-olds can segment a word unaffected by coda neutralization (e.g., 공 'ball').
    2. Next, test whether 7.5-month-olds can segment a neutralization-affected word like "잎" (probably difficult).
    3. Then test whether 14-month-olds can segment the stimuli from step 2.
    (2) Unlike English, Korean has no clear lexical stress (though in Jeolla dialect, vowel length can play a stress-like role). Do Korean-learning infants then use different segmentation cues than English-learning infants? If we play English sentences to Korean infants, can they still segment using stress? => Study this topic later, after running one or two segmentation experiments.
    Compared to (1) or (2), a statistical word learning study (as covered in class) is easier to design and already has an established paradigm. (지효's homework)


• Phoneme perception experiment plan
When do Korean-learning infants come to know that ㄱ, ㅋ, and ㄲ are different sounds? (I believe the Chung-Ang University lab is currently running this study or has already finished it, but the experimental design needs improvement, and I understand their method is single-screen habituation. We could run it with HPP or with IPLP. We can read papers on this together and design the experiment. If we run it, I expect it to be very popular: everyone wonders about this, but no one has yet given a reliable answer.)

• ManyBabies follow-up experiment
Several labs are running a follow-up study that administers the CDI at 18 and 24 months to infants who participated in the HPP experiment, and we have decided to join. Some of our infants are already past the target age, so we are hurrying to build a web version of the CDI; the UBC lab has offered to help, and we are waiting to hear from them. High priority, since the infants keep aging out. We do not know how far UBC can help, but we will probably need to digitize the CDI form ourselves, and we should be ready to start as soon as UBC gets in touch.
-----> With a Google Docs questionnaire, we can send the survey by email, text message, or KakaoTalk. Since mothers have to check several hundred words, administering it on the spot may be impractical; instead, consider collecting contact information (from mothers who already participated, or at the Baby Fair) and letting them complete the survey at a convenient time. A Google Docs questionnaire cannot produce a score for each response automatically, but individual responses can be viewed, so we can score them ourselves and send the results back by email, text, or KakaoTalk: "grading it like a test."
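The manual "grading" step could be scripted once responses are exported from the Google questionnaire as CSV. A minimal sketch, assuming a hypothetical export layout (an email column plus one column per checklist word, with "checked" marking words the child produces; the real export headers will depend on how the form is built):

```python
import csv
import io

# Hypothetical CSV export: one row per mother, one column per checklist word.
# This inline sample stands in for the real downloaded file.
sample_export = """email,공,책,밥,국
mom1@example.com,checked,checked,,checked
mom2@example.com,,checked,,
"""

def score_responses(csv_text):
    """Return {email: number of words checked} for each respondent."""
    reader = csv.DictReader(io.StringIO(csv_text))
    scores = {}
    for row in reader:
        email = row.pop("email")  # everything left is a word column
        scores[email] = sum(1 for v in row.values() if v == "checked")
    return scores

print(score_responses(sample_export))
# {'mom1@example.com': 3, 'mom2@example.com': 1}
```

The resulting per-mother totals could then be pasted into the reply emails or KakaoTalk messages described above.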


• Experiment on the correlation between SES and language outcomes
Once the web version of the language checklist described above is finished, we could go to the Baby Fair and collect data on the spot, administering a home-environment questionnaire (an indirect measure of SES) together with the language checklist. I am not sure whether this is feasible. The steps are: first write the questionnaire, then build a web version of it, and then build the web version of the CDI; it might actually not take long.
-----> Draft questionnaire items that measure SES indirectly.

If time permits, 지효 could look into whether this is possible with Google Docs. If she lacks the time or the skills but knows someone who could do it, it is fine to hire that person as a part-timer. Building the Google Doc itself should not be hard, but automatically storing and displaying the results will take some skill.



Thursday, February 22, 2018

2018.02.23 Lab audio/video re-setup




<IPLP>
MUTE channels 1/2 and 3/4; leave 5/6 on

<HPP>
MUTE channel 5/6; leave 1/2 and 3/4 on

                                              TERRATEC input & output


• audio stimuli at 70 dB (referenceaudio.wav)
• ELAN 1st & 2nd cam sync: clap hands / ring a bell
• GoPro app