---
title: "Data Management"
execute:
  echo: true
  error: true
jupyter: python3
format:
  html:
    code-fold: false
    code-tools:
      source: true
      toggle: true
---
# Import Modules {#importModules}
The default way to import a module in `Python` is:
```{python}
#| eval: false
import moduleName1
import moduleName2
```
For example:
```{python}
import math
import random
import collections
import numpy as np
import pandas as pd
import matplotlib.pyplot as pp
```
# Import Data {#importData}
Importing data using `pandas` takes syntax of the following form for `.csv` files:
```{python}
#| eval: false
data = pd.read_csv("filepath/filename.csv") # uses the pandas module
```
Below, I import a `.csv` file and save it into an object called `mydata` (you could call this object whatever you want):
```{python}
#| eval: false
mydata = pd.read_csv("https://osf.io/s6wrm/download") # uses the pandas module
```
```{python}
#| include: false
mydata = pd.read_csv("data/titanic.csv") #https://osf.io/s6wrm/download
```
# Save Data {#saveData}
Saving data in Python takes syntax of the following form for `.csv` files:
```{python}
#| eval: false
object.to_csv("filepath/filename.csv", index = False)
```
For example:
```{python}
#| eval: false
mydata.to_csv("mydata.csv", index = False)
```
# Set a Seed {#seed}
Set a seed (any number) to reproduce the results of analyses that involve random number generation.
```{python}
random.seed(52242) # uses the random module
```
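Note that `random.seed()` seeds only Python's built-in `random` module; `numpy` maintains its own random state and must be seeded separately. Below is a minimal sketch (the seed value and variable names are illustrative):

```{python}
import numpy as np

# NumPy's random state is separate from the random module's,
# so it must be seeded with np.random.seed()
np.random.seed(52242)
draws1 = np.random.randn(3)

np.random.seed(52242)  # re-seeding reproduces the same draws
draws2 = np.random.randn(3)

print((draws1 == draws2).all())
```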
# Run a `Python` Script {#runScript}
To run a `Python` script, use the following syntax:
```{python}
#| eval: false
%run "filepath/filename.py"
```
# Render a Quarto (`.qmd`) File {#renderQmd}
To render a Quarto (`.qmd`) file, you would typically use the command line.
Here is the equivalent command in a `Python` cell using the `!` operator to run shell commands:
```{python}
#| eval: false
!quarto render "filepath/filename.qmd"
```
# Variable Names {#varNames}
To look at the names of variables in a dataframe, use the following syntax:
```{python}
list(mydata.columns)
```
# Logical Operators {#logicalOperators}
Logical operators evaluate a condition for each value and yield values of `True` and `False`, corresponding to whether the evaluation for a given value met the condition.
## Is Equal To: `==`
```{python}
mydata['survived'] == 1
```
## Is Not Equal To: `!=`
```{python}
mydata['survived'] != 1
```
## Greater Than: `>`
```{python}
mydata['parch'] > 1
```
## Less Than: `<`
```{python}
mydata['parch'] < 1
```
## Greater Than or Equal To: `>=`
```{python}
mydata['parch'] >= 1
```
## Less Than or Equal To: `<=`
```{python}
mydata['parch'] <= 1
```
## Is in a Set of Values: `isin`
```{python}
anotherVector = [0,1]
mydata['parch'].isin(anotherVector)
```
## Is Not in a Set of Values
In Python, you can use the `~` operator in combination with the `isin` method to check if values are not in another sequence.
```{python}
~mydata['parch'].isin(anotherVector)
```
## Is Missing: `isnull()`
```{python}
mydata['prediction'].isnull()
```
## Is Not Missing: `notnull()`
```{python}
mydata['prediction'].notnull()
```
## And: `&`
```{python}
mydata['prediction'].notnull() & (mydata['parch'] >= 1)
```
## Or: `|`
```{python}
mydata['prediction'].isnull() | (mydata['parch'] >= 1)
```
# Subset {#subset}
To subset a dataframe, you can use the `loc` and `iloc` accessors, or directly access the columns by their names.
```{python}
#| eval: false
dataframe.loc[rowsToKeep, columnsToKeep]
dataframe.iloc[rowIndices, columnIndices]
```
You can subset by using any of the following:
- numeric indices of the rows/columns to keep (or drop)
- names of the rows/columns to keep (or drop)
- boolean arrays corresponding to which rows/columns to keep
## One Variable
To subset one variable, use the following syntax:
```{python}
mydata['age']
```
## Particular Rows of One Variable
To subset particular rows of one variable, use the following syntax:
```{python}
mydata.loc[mydata['survived'] == 1, 'age']
```
## Particular Columns (Variables)
To subset particular columns/variables, use the following syntax:
```{python}
subsetVars = ["survived", "age", "prediction"]
mydata[subsetVars]
```
Or, to drop columns:
```{python}
dropVars = ["sibsp", "parch"]
mydata.drop(columns = dropVars)
```
## Particular Rows
To subset particular rows, you can use the `iloc` accessor or boolean indexing.
```{python}
subsetRows = [0, 2, 4] # Python uses 0-based indexing
mydata.iloc[subsetRows]
mydata[mydata['survived'] == 1]
```
## Particular Rows and Columns
To subset particular rows and columns, you can use the `iloc` accessor or boolean indexing.
```{python}
subsetRows = [0, 2, 4] # Python uses 0-based indexing
subsetVars = ["survived", "age", "prediction"]
mydata.iloc[subsetRows][subsetVars]
mydata.loc[mydata['survived'] == 1, subsetVars]
```
# View Data {#viewData}
## All Data
To view data in Python, you can simply print the dataframe:
```{python}
print(mydata)
```
Or, if you're using a `Jupyter` notebook, you can just write the name of the dataframe:
```{python}
mydata
```
## First 6 Rows/Elements
To view only the first six rows of a dataframe or elements of a series, use the following syntax:
```{python}
mydata.head()
mydata['age'].head()
```
# Data Characteristics {#dataCharacteristics}
## Data Structure
```{python}
print(mydata.info())
```
## Data Dimensions
Number of rows and columns:
```{python}
print(mydata.shape)
```
## Number of Elements
```{python}
print(len(mydata['age']))
```
## Number of Missing Elements
```{python}
print(mydata['age'].isnull().sum())
```
## Number of Non-Missing Elements
```{python}
print(mydata['age'].notnull().sum())
```
# Create New Variables {#createNewVars}
To create a new variable, you can directly assign a value to a new column in the dataframe.
```{python}
mydata['newVar'] = None
```
Here is an example of creating a new variable:
```{python}
mydata['ID'] = range(1, len(mydata) + 1)
```
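New variables are often created conditionally on the values of existing variables. A common approach is `np.where()`, sketched below on a small hypothetical dataframe (the `age`/`isAdult` names are illustrative, not from the dataset above):

```{python}
import pandas as pd
import numpy as np

# Hypothetical dataframe for illustration
df = pd.DataFrame({'age': [5, 30, 70]})

# np.where assigns one value where the condition holds, another elsewhere
df['isAdult'] = np.where(df['age'] >= 18, 1, 0)
print(df)
```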
# Create a Dataframe {#createDF}
Here is an example of creating a dataframe:
```{python}
mydata2 = pd.DataFrame({ # uses pandas module
    'ID': list(range(1, 6)) + list(range(1047, 1052)),
    'cat': np.random.choice([0, 1], 10) # uses numpy module
})
mydata2
```
# Recode Variables {#recodeVars}
Here is an example of recoding a variable:
```{python}
mydata.loc[mydata['sex'] == "male", 'oldVar1'] = 0
mydata.loc[mydata['sex'] == "female", 'oldVar1'] = 1
mydata.loc[mydata['sex'] == "male", 'oldVar2'] = 1
mydata.loc[mydata['sex'] == "female", 'oldVar2'] = 0
```
Recode multiple variables:
```{python}
columns_to_recode = ['survived', 'pclass']

# Map numeric codes to labels
for col in columns_to_recode:
    mydata[col] = mydata[col].map({1: 'Yes', 0: 'No'})

# Apply a function to each value; note that because the loop above already
# replaced the 0/1 codes with labels, every value here maps to 2
for col in columns_to_recode:
    mydata[col] = mydata[col].map(lambda x: 1 if x in [0, 1] else 2)
```
# Rename Variables {#renameVars}
```{python}
mydata = mydata.rename(columns = {
    'oldVar1': 'newVar1',
    'oldVar2': 'newVar2'
})
```
Using a dictionary of variable names:
```{python}
#| eval: false
varNamesFrom = ["oldVar1","oldVar2"]
varNamesTo = ["newVar1","newVar2"]
rename_dict = dict(zip(varNamesFrom, varNamesTo))
mydata = mydata.rename(columns = rename_dict)
```
# Convert the Types of Variables {#convertVarTypes}
One variable:
```{python}
mydata['factorVar'] = mydata['sex'].astype('category')
mydata['numericVar'] = mydata['prediction'].astype(float)
mydata['integerVar'] = mydata['parch'].astype(int)
mydata['characterVar'] = mydata['sex'].astype(str)
```
Multiple variables:
```{python}
mydata[['age', 'parch', 'prediction']] = mydata[['age', 'parch', 'prediction']].astype(float)
mydata.loc[:, 'age':'parch'] = mydata.loc[:, 'age':'parch'].astype(float)

# Convert all categorical columns to string
for col in mydata.select_dtypes('category').columns:
    mydata[col] = mydata[col].astype(str)
```
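When a column contains values that cannot be parsed as numbers, `astype(float)` raises an error. A more forgiving option is `pd.to_numeric()` with `errors = 'coerce'`, which turns unparseable values into `NaN`. A minimal sketch (the series below is hypothetical):

```{python}
import pandas as pd

# Hypothetical column with a non-numeric entry
s = pd.Series(['1.5', '2.0', 'unknown'])

# astype(float) would raise an error here; to_numeric with
# errors='coerce' converts unparseable values to NaN instead
converted = pd.to_numeric(s, errors = 'coerce')
print(converted)
```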
# Merging/Joins {#merging}
## Overview
Merging (also called joining) combines two data objects using a shared set of variables called "keys."
The keys are the variable(s) that uniquely identify each row (i.e., they account for the levels of nesting).
In some data objects, the key might be the participant's ID (e.g., `participantID`).
However, some data objects have multiple keys.
For instance, in long form data objects, each participant may have multiple rows corresponding to multiple timepoints.
In this case, the keys are `participantID` and `timepoint`.
If a participant has multiple rows corresponding to timepoints and measures, the keys are `participantID`, `timepoint`, and `measure`.
In general, each row should have a value on each of the keys; there should be no missingness in the keys.
To merge two objects, the keys must be present in both objects.
The keys are used to merge the variables in object 1 (`x`) with the variables in object 2 (`y`).
Different merge types select different rows to merge.
Note: if the two objects include variables with the same name (apart from the keys), `pandas` cannot know how you want each to appear in the merged object.
So, it will add a suffix (e.g., `_x`, `_y`) to each common variable to indicate which object (i.e., object `x` or object `y`) the variable came from, where object `x` is the first object—i.e., the object to which object `y` (the second object) is merged.
In general, apart from the keys, you should not include variables with the same name in two objects to be merged.
To prevent this, either remove or rename the shared variable in one of the objects, or include the shared variable as a key.
However, as described above, you should include it as a key ***only*** if it uniquely identifies each row in terms of levels of nesting.
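The suffix behavior described above can be seen in a small sketch (the objects and the shared `score` column are hypothetical):

```{python}
import pandas as pd

# Two hypothetical objects that share a non-key column called 'score'
x = pd.DataFrame({'participantID': [1, 2], 'score': [10, 20]})
y = pd.DataFrame({'participantID': [1, 2], 'score': [30, 40]})

# pandas appends _x and _y to disambiguate the shared column
merged = pd.merge(x, y, on = 'participantID')
print(list(merged.columns))
```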
## Data Before Merging
Here are the data in the `mydata` object:
```{python}
print(mydata)
print(mydata.shape)
```
Here are the data in the `mydata2` object:
```{python}
print(mydata2)
print(mydata2.shape)
```
## Types of Joins {#mergeTypes}
### Visual Overview of Join Types
Below is a visual that depicts various types of merges/joins.
Object `x` is the circle labeled as `A`.
Object `y` is the circle labeled as `B`.
The area of overlap in the Venn diagram indicates the rows on the keys that are shared between the two objects (e.g., `participantID` values 1, 2, and 3).
The non-overlapping area indicates the rows on the keys that are unique to each object (e.g., `participantID` values 4, 5, and 6 in Object `x` and values 7, 8, and 9 in Object `y`).
The shaded yellow area indicates which rows (on the keys) are kept in the merged object from each of the two objects, when using each of the merge types.
For instance, a left outer join keeps the shared rows and the rows that are unique to object `x`, but it drops the rows that are unique to object `y`.

Image source: [Predictive Hacks](https://predictivehacks.com/?all-tips=anti-joins-with-pandas) (archived at: <https://perma.cc/WV7U-BS68>)
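One way to see which rows each merge type keeps is to pass `indicator = True` to `pd.merge()`, which adds a `_merge` column recording whether each row came from the left object, the right object, or both. A sketch with hypothetical objects whose keys partially overlap:

```{python}
import pandas as pd

# Hypothetical objects with partially overlapping keys
x = pd.DataFrame({'participantID': [1, 2, 3, 4]})
y = pd.DataFrame({'participantID': [3, 4, 5, 6]})

# indicator=True adds a '_merge' column with values
# 'left_only', 'right_only', or 'both'
merged = pd.merge(x, y, on = 'participantID', how = 'outer', indicator = True)
print(merged)
```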
### Full Outer Join {#fullJoin}
A full outer join includes all rows in $x$ **or** $y$.
It returns columns from $x$ and $y$.
Here is how to merge two data frames using a full outer join (i.e., "full join"):
```{python}
fullJoinData = pd.merge(mydata, mydata2, on = "ID", how = "outer")
print(fullJoinData)
print(fullJoinData.shape)
```
### Left Outer Join {#leftJoin}
A left outer join includes all rows in $x$.
It returns columns from $x$ and $y$.
Here is how to merge two data frames using a left outer join ("left join"):
```{python}
leftJoinData = pd.merge(mydata, mydata2, on = "ID", how = "left")
print(leftJoinData)
print(leftJoinData.shape)
```
### Right Outer Join {#rightJoin}
A right outer join includes all rows in $y$.
It returns columns from $x$ and $y$.
Here is how to merge two data frames using a right outer join ("right join"):
```{python}
rightJoinData = pd.merge(mydata, mydata2, on = "ID", how = "right")
print(rightJoinData)
print(rightJoinData.shape)
```
### Inner Join {#innerJoin}
An inner join includes all rows that are in **both** $x$ **and** $y$.
An inner join returns one row of $x$ for each matching row of $y$; if a key value matches multiple rows, it returns every combination of the matching rows, duplicating values from either side.
It returns columns from $x$ and $y$.
Here is how to merge two data frames using an inner join:
```{python}
innerJoinData = pd.merge(mydata, mydata2, on = "ID", how = "inner")
print(innerJoinData)
print(innerJoinData.shape)
```
### Cross Join {#crossJoin}
A cross join combines each row in $x$ with each row in $y$.
```{python}
rater = pd.DataFrame({'rater': ["Mother","Father","Teacher"]})
timepoint = pd.DataFrame({'timepoint': range(1, 4)})
crossJoinData = pd.merge(rater, timepoint, how = "cross") # requires pandas >= 1.2
print(crossJoinData)
print(crossJoinData.shape)
```
# Long to Wide {#longToWide}
```{python}
import seaborn as sns
# Load the iris dataset
iris = sns.load_dataset('iris')
# Melt the dataset to a long format
iris_long = iris.melt(
    id_vars = 'species',
    var_name = 'measurement',
    value_name = 'value')
print(iris_long)
# Pivot the dataset to a wide format
iris_wide = iris_long.pivot_table(
    index = 'species',
    columns = 'measurement',
    values = 'value')
print(iris_wide)
```
# Wide to Long {#wideToLong}
Original data:
```{python}
import seaborn as sns
# Load the iris dataset
iris = sns.load_dataset('iris')
print(iris)
```
Data in long form, transformed from wide form using `pandas`:
```{python}
iris_long = iris.melt(
    id_vars = 'species',
    var_name = 'measurement',
    value_name = 'value')
print(iris_long)
```
# Average Ratings Across Coders {#avgAcrossCoders}
Create data with multiple coders:
```{python}
# Create a dataframe with multiple coders
idWaveCoder = pd.DataFrame(
    np.array(np.meshgrid(np.arange(1, 101), np.arange(1, 4), np.arange(1, 4))).T.reshape(-1, 3),
    columns = ['id', 'wave', 'coder'])
# Add positiveAffect and negativeAffect columns with random values
np.random.seed(0)
idWaveCoder['positiveAffect'] = np.random.randn(len(idWaveCoder))
idWaveCoder['negativeAffect'] = np.random.randn(len(idWaveCoder))
# Sort the dataframe
idWaveCoder = idWaveCoder.sort_values(['id', 'wave', 'coder'])
print(idWaveCoder)
```
Average data across coders:
```{python}
# Group by id and wave, then calculate the mean for each group
idWave = idWaveCoder.groupby(['id', 'wave']).mean().reset_index()
# Drop the coder column
idWave = idWave.drop(columns=['coder'])
print(idWave)
```
# Session Info
```{python}
import sys
print(sys.version)
```