Tips for Implementing the GTD System in your Workflow

These are some tips that I use to maintain my productivity in my workflow.

1. Experiment

I think the only reason my GTD system works well is that I am constantly trying new things.

This is something I learned from Sam Spurlin through his blog posts and podcasts. He would always give himself a little experiment: go purely analog for a month, use all Apple native software, etc.

I very much do the same thing, especially when there’s something about my current workflow that I don’t like. Getting too many newsletters? Google how to reduce newsletters and clutter in your email. I eventually settled on SaneBox. Don’t like your to-do list workflow? I tried an analog agenda and multiple apps before I eventually settled on Todoist, which I’ve used for a few years now.

This also means that what works for me may not work for you. I found my complete opposite in Dr. Raul Pacheco-Vega. I am a purely digital person (okay, I’ve started bringing a small Moleskine to take notes in, but I always transfer it to my computer) and he’s a purely analog person (he has this awesome thing called the Everything Notebook that you should check out).

2. Create a system you can take everywhere…

This is one reason why I am a purely digital person: I go nowhere without my phone. It is my brain for dealing with tasks. Need to remember something? Use the quick-add widget and 20 seconds later it’s written down for me to remember later.

If you are more analog, the Everything Notebook mentioned previously would be helpful. I can’t imagine lugging that thing around all the time, though, so perhaps have a small Moleskine notebook that could fit in your pocket.

3. … and capture everything

Do. Not. Rely. On. Your. Memory.

Was that clear enough? You will forget something if you try to rely on memory. As soon as a task comes to mind, you are to write it down. Period. This is the only way nothing will slip through the cracks!

4. Chunk it out!

If you find you are procrastinating on an important task, constantly postponing the day you will do it, then chances are the task is too big and daunting. “Work on thesis” is an example of a very poorly written task. I sometimes write these tasks down because they pop into my memory as something I need to do, but later I need to sit down and chunk them out. By “work on my thesis,” I really mean I need to fix the descriptive statistics in the second paragraph of the results section, expand the discussion section to add in literature discussed in the introduction, and write my limitations section. Those are easily manageable tasks. They have the added benefit of taking a short amount of time, so I can do them in between meetings if I have a 15-minute window.

Want some more tips? I found this article on the Todoist blog to be absolutely wonderful.

Managing life using the Getting Things Done system

If you want to learn how to maintain productivity, then you need to read the book Getting Things Done by David Allen. This book is seriously a life-changer. I have used this system for the past few years and while I don’t adhere to his system 100%, the majority of his principles are extremely beneficial.

The Getting Things Done system has five steps: capture/collect, process, organize, review, and do. I’m going to discuss each step in turn.

1. Capture/collect

First, you need a place to capture things that come your way. I have two places where I capture everything: my email and my to-do list. You need a system that can capture things wherever you are. This is why I use a to-do list app (shout out to Todoist): it is available on all operating systems and connects to many other types of software, including Gmail and Slack. If you deal with a lot of papers and other analog data, then it’s recommended you have a physical inbox as well.

This system needs to be able to capture everything for you. This includes emails, tasks, articles, questions that pop up in your mind, bills, and notes. Your brain is not a suitable place to collect these things! Your brain is fallible and you will forget something. Don’t be that person!

2 and 3. Process and Organize

Next, you need to figure out what to do with everything that you collect. When you decide to process tasks (i.e., go through your task list, check your email, clear out your paper inbox), then for each item you go through the following steps. If the task isn’t actionable, meaning you can’t do something with it, then either chuck it, archive it (e.g., save the email into another folder or save the document into its appropriate folder), or mark it as a “someday” task. I like to have a “someday” project in my to-do list where I save thoughts and ideas for future projects (like my thesis and dissertation!).

If the item does have an action, then you have one of three options. If the task takes less than two minutes, then do it. I find that the two-minute rule most often applies to emails. When I’m checking emails, if I can respond to an email within two minutes, then I do it right then and there. If I can’t get it done in two minutes, can someone else do it for me? Sometimes someone asks me a question that I do not know the answer to, in which case I forward the email to the person who does. In all other cases, defer the action: either add the event to your calendar or add the task to your to-do list.
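The process/organize flow above is essentially a small decision tree, which makes it easy to sketch in code. Here is a minimal Python sketch of that flow; the item fields (`actionable`, `estimated_minutes`, `delegable`, and so on) are hypothetical names for illustration, not part of Todoist or any real app’s API.

```python
def process_item(item):
    """Route a single inbox item per the GTD processing steps above.

    `item` is assumed to be a dict with hypothetical keys like
    'actionable', 'reference', 'someday', 'estimated_minutes',
    'delegable', and 'scheduled'.
    """
    if not item.get("actionable"):
        # Non-actionable: trash it, file it for reference,
        # or park it on a someday/maybe list.
        if item.get("reference"):
            return "archive"
        if item.get("someday"):
            return "someday/maybe list"
        return "trash"
    # Actionable: do, delegate, or defer.
    if item.get("estimated_minutes", 0) <= 2:
        return "do it now"          # the two-minute rule
    if item.get("delegable"):
        return "delegate it"        # e.g., forward the email
    if item.get("scheduled"):
        return "calendar"           # date-specific actions
    return "to-do list"             # everything else is deferred
```

The point of writing it this way is that every inbox item gets exactly one destination, so nothing lingers half-processed.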

4. Review

I think this is the area where most people fail the Getting Things Done system. They store their information away and then promptly forget about it because they are not engaging with their system enough. To properly review, you need to get clear, get current, and get creative. To get clear, make sure your system is cleared and ready to go. This means gathering all loose items that need to be collected and processing them all. To get current, review your calendar, to-do list, waiting-for list, project lists, and any other checklists you might have. Prioritize tasks for the upcoming week alongside your calendar (i.e., don’t plan to write for three hours on days that you have back-to-back meetings from 9 to 5). Lastly, to get creative, review your someday/maybe lists and see what projects you might be able to start.

I personally do each review chunk on a different timeline. I get clear at the end of each day or at the beginning of a long work session. This frees up my mind to really focus on the tasks at hand. I get current on a weekly basis: on Sundays, I sit down with my calendar and to-do list and make sure my days are evenly split and I’ve properly prioritized my tasks. I get creative monthly. This means checking my someday/maybe pile and checking the progress on projects that are low on the totem pole but where my inspiration resides. When the summer hits, I’ll be able to get creative a little more frequently (yay!).

5. Do!

At this point, you should be able to engage with your task list and get things done! With your mind free from all the things you are trying to remember, your inbox down to zero, and your calendar and task list organized and ready to go, you can now focus on the important work at hand. You might need additional work in how to prioritize tasks or break down tasks into manageable, bite-sized chunks, so we’ll cover that topic next!

Comments Requested: College Access Journal Publication

Dr. Nazanin Zargarpour and I were accepted to present the attached paper at the American Educational Research Association’s (AERA) 2017 conference. We are very interested in publishing the paper and would love to get feedback from interested individuals to help propel it forward.

Click here to download the paper: Zargarpour & Wanzer (2017). From college access to success. AERA Paper

Developmental Appropriateness as Cultural Competence in Evaluation

Children and adults differ more than simply age; rather, they differ in culture as well.1 This recognition can be hard for evaluators: as we have all passed through childhood, it is easy to believe we have the same or greater knowledge of children’s culture than they do. Furthermore, our “spatial proximity to children may lead us to believe that we are closer to them than we really are—only different in that (adults claim) children are still growing up (‘developing’) and are often wrong (‘lack understanding’).”2

This points to a need for cultural competence, which the American Evaluation Association (AEA) describes as “critical for the profession and for the greater good of society.”3 Cultural competence practice in evaluation includes:

  • Acknowledging the complexity of cultural identity
  • Recognizing the dynamics of status and power (e.g., the differential power between adults and children)
  • Recognizing and eliminating bias in language
  • Employing culturally (i.e., developmentally) appropriate methods

In particular, culturally competent evaluations require inclusion of cultural expertise on the evaluation team.4 In the case of youth programs, this means inclusion of developmental expertise, which can involve developmental experts (i.e., psychologists, developmental scientists) but evaluators should also strive to include the youth themselves.

A youth participatory approach can reduce the harmful power imbalances between adult evaluators and youth participants,5 is more ethical for youth,6 and offers many benefits for children and adolescents, including knowledge about the evaluation process and improvements in self-esteem, decision-making, and problem-solving skills.7

However, youth participation can occur at a range of levels.8 At the lowest level, participants are simply included as a data source, which can further vary between direct (i.e., surveys, interviews) and indirect (i.e., observations, archival data) data collection. Further up the participatory ladder is giving youth input on the evaluation process. The highest level of youth participation is youth actually leading the evaluation, much like they would in a traditional empowerment evaluation.

Inclusion of youth, or at least adult developmental experts, can improve the likelihood of a culturally competent evaluation for the first two bullet points mentioned above. However, evaluators still must make sure the evaluation design and methods are culturally, and therefore developmentally, appropriate. The next post will discuss how evaluators can promote cultural competence across the evaluation process in the context of youth programs.

How Can Evaluation Avoid Lemons?

I recently stumbled across a blog post by Dr. Simine Vazire, an associate professor in psychology at UC Davis, which discussed an economics article by Akerlof, “The market for ‘lemons’: Quality uncertainty and the market mechanism.” Here’s what she wrote:

In this article, Akerlof employs the used car market to illustrate how a lack of transparency (which he calls “information asymmetry”) destroys markets.  when a seller knows a lot more about a product than buyers do, there is little incentive for the seller to sell good products, because she can pass off shoddy products as good ones, and buyers can’t tell the difference.  the buyer eventually figures out that he can’t tell the difference between good and bad products (“quality uncertainty”), but that the average product is shoddy (because the cars fall apart soon after they’re sold). therefore, buyers come to lose trust in the entire market, refuse to buy any products, and the market falls apart. (Vazire, 2017, “looking under the hood”)

This is very similar to the replication crisis in psychology, and I worry that evaluation may run into many of these same issues. This worry was also expressed by Scriven (2015).1 He writes, “Also depressing was the discovery that the great classic disciplines, although they thought they had a quality control system, in fact the procedure that everyone immediately put forward as performing that function–peer review–turned out to have been hardly ever studied for simple but essential virtues like reliability and validity…” (p. 18).

What peer review system does evaluation have? Scriven put forth meta-evaluation, but the practice is rarely, if ever, done. Scriven says:

This is because our real world practice is largely in the role of consultant, and consultants’ work does not normally undergo peer review. We need to tighten up the trashy way peer review is done in other disciplines and use serious meta-evaluation to fill the gap in our own emerging discipline with respect to that job that we say (and can prove) that peer review ought to be done in the other disciplines. (p. 19)

Scriven goes on to argue that evaluation has a duty to study how evaluation is conducted in other disciplines, leading evaluation to be the “alpha discipline.” But before we can consider evaluation as the alpha discipline, we have to do a “full analysis of the pragmatics” of evaluation, meaning we need to more clearly define evaluation so that evaluation is considered a key methodology of social science

“that must be mastered in order to do all applied (and some pure) social science. In that way, good evaluation research designs will be the exemplar for much of social science, instead of social science treating personnel or program evaluation as something they can do with their current resources, albeit conceding that there are some specialists in these sub-areas.” (p. 20)

Scriven’s solution of serious meta-evaluation done publicly aligns with the solution promoted by Akerlof: transparency. I further argue that there need to be more serious regulations applied to ensure this transparency, and one way to do this is through professionalization. Unfortunately, professionalization within the American Evaluation Association has met with serious resistance, but work by our colleagues up north (the Canadian Evaluation Society) and work on standards and competencies within AEA are steps forward. I think professionalization will help evaluators more clearly define who evaluators are and what evaluation is so that the field can move forward as the alpha discipline Scriven describes.

 

Past Its Expiration Date: When Longitudinal Research Loses Relevance [GUEST POST]

Jennifer Hamilton is a Dr. Who fan and an applied statistician who loves to talk about evaluation methodology. She is fun at parties. Dr. Hamilton has been working in the field of evaluation since 1996, and has conducted evaluations with federal government programs; state, county, and city agencies and organizations; and foundations and nonprofits.

You can email Jennifer Hamilton at jenniferannehamilton@yahoo.com

My company (the Rockville Institute) was hired to conduct a five-year evaluation of a national program that helps schools in high-poverty neighborhoods improve the health of students and staff. Schools monitor their progress using the Centers for Disease Control and Prevention’s (CDC) School Health Index.

The program had recently developed an on-line model of support, to supplement their traditional on-site support model, but wasn’t sure if they should take it to scale. They wanted to base their decision on evaluation results. Therefore, we proposed a rigorous randomized study comparing these two different types of support.

The problem was, two years in, the program’s revenue was shrinking and they had to start using the on-line support program because it was more cost-effective. They could not wait for the results of the evaluation to make their decision. In short, the program did not need us anymore.

We knew their decision was made, but we hoped that the study results could still be useful to other programs. We needed to make some changes so that the study would be relevant to a broader audience. We had two groups: less and more intensive support. If we could expand this by adding a no-support arm and an even more intensive arm, the results could be relevant to all kinds of programs. So we developed a continuum of support intensity: no support (new arm), low support (the on-line model), moderate support (the on-site model), and a new high-intensity model of on-site support (new arm).

But where were we going to find these extra schools?   

We knew that schools implementing the program were only a small portion of the universe of schools completing the CDC instrument. The CDC could therefore provide outcome data for matched schools not participating in the program.

 

We also knew that another study of the program was being conducted and was using the same outcome measure as us. That support was provided in person by program managers with lower caseloads and more time on-site than the moderate-support group (M2) from the original design. The other research group could therefore provide outcome data for matched schools receiving a more intensive version of support.

But What About the Methodology?

The question is how to add these new groups while retaining the rigor of the original design. While our original schools were randomized into groups, the new schools can only be matched to the randomized pairs. So we are mixing a quasi-experimental design (QED) into a randomized controlled trial (RCT). What does this mean, practically speaking? Well, we have to think about all the possible comparisons.

The original L1/M2 comparison is unchanged and maintains the highest level of internal validity, because both groups of schools were randomly assigned. All of the other possible contrasts remain internally valid, although to a slightly lesser extent, because they now involve matched schools instead of randomly assigned schools.
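In the hybrid design, the strength of each pairwise contrast follows mechanically from whether both arms were randomized. A quick Python sketch of that logic (the arm labels other than L1 and M2 are made up for illustration):

```python
from itertools import combinations

# The four support arms in the expanded design; only the original
# two (L1, M2) were randomized -- the new arms are matched.
arms = {
    "none": "matched",      # new no-support arm
    "L1": "randomized",     # low support (on-line model)
    "M2": "randomized",     # moderate support (on-site model)
    "high": "matched",      # new high-intensity on-site arm
}

def contrast_strength(a, b):
    """Classify a pairwise contrast by how its arms were assigned."""
    if arms[a] == arms[b] == "randomized":
        return "experimental (RCT)"
    return "quasi-experimental (matched)"

# Enumerate all six possible pairwise comparisons.
for a, b in combinations(arms, 2):
    print(f"{a} vs {b}: {contrast_strength(a, b)}")
```

Of the six contrasts, only L1 vs M2 rests on random assignment; the other five lean on the quality of the matching.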

Implications for Evaluators

This study illustrates a common danger of longitudinal designs: they just take too long in the policy world, and programs are typically in flux. But the funder supported efforts to expand the focus beyond the specific program to one with broader applicability. This resulted in a hybrid design that still maintained sufficient rigor to respond to broad policy questions. Flexibility in the evaluation can still save an RCT, and this mixed QED-RCT design can help!

What is the developmental context, and why is it important to evaluation?

In a paper presented at the 2009 annual meeting of the American Evaluation Association1, Tiffany Berry, Susan Menkes, and Katherine Bono discussed how evaluators could improve their practice through the developmental context. They argue that evaluators have spent years discussing how the program context (e.g., age of program, accessibility, size of program, timeline, political nature) and evaluation context (e.g., stakeholder involvement, method proclivity, measurement tools, purpose, use of results) affect the practice of evaluation. However, there has been little discussion on how the participants of a program, particularly their age, also affect the practice of evaluation.2 Thus, they describe what they call the Developmental Context and the three core developmental facets that define it.

1. Principles of Development

The first component of the developmental context involves knowledge of principles and theories of development. These principles and theories describe how the environment, the individual, and the interaction between the two shape development over time. There have been many broadly accepted theories of such development. Two of my personal favorites, and two that emphasize the interaction between the environment and the individual, are Bronfenbrenner’s bioecological systems theory (Bronfenbrenner & Morris, 2006) and Lerner’s (2006) developmental systems theory.

Relevance of Principles of Development to Evaluation

People are a product of their individual attributes and the contexts they live in over time. Thus, when conducting evaluations, it is important to examine participant, program, and other contextual characteristics in tandem. A systems perspective to evaluation can be a useful endeavor to achieve this. This type of approach also helps in answering the “For whom does this program work?” question in evaluation.

2. Developmental Domains

Developmental domains refer to cognitive, socioemotional, physical, and other domains of development. For instance, the cognitive domain refers to intellectual or mental development, such as thinking, memory, reasoning, problem-solving, language, and perception. The socioemotional domain refers to relationship skills, social awareness, self-management, self-awareness, and responsible decision-making.3 The physical domain refers to development of body structure, including sensory/motor development and coordination of perception and movement.

Relevance of Developmental Domains to Evaluation

Developmental domains primarily seem to affect the appropriate methods for participants. For example, knowledge of the cognitive stage of participants can help evaluators accommodate the reading level ability for construction of a paper survey. Knowledge of the socioemotional stage of participants can determine whether focus groups or interviews would be better suited for participants. Also, knowledge of the physical stage of participants can determine whether computer surveys, which require the use of fine motor skills for using a mouse and keyboard, are appropriate.

3. Age of Participants

The age of participants is perhaps most salient to evaluators. We typically group young participants into a variety of categories (e.g., infants, toddlers, young children, older children, adolescents, teenagers, young adults, youth), but these categories often overlap and are not clearly defined in the literature. For example, are youth composed of children as well as adolescents?

Relevance of Age of Participants to Evaluation

While age is typically used as a determinant of whether a data collection method is developmentally appropriate, the issue becomes complicated when considering children and adolescents from diverse populations (e.g., low-income, cultural and ethnic minorities, those with mental, emotional, or physical challenges).4 Disadvantaged youth may not be at the same developmental stages as their more advantaged counterparts.5 While age is a simple factor to consider when designing and conducting evaluations, consideration of age alone may not be sufficient to ensure a developmentally appropriate evaluation.

Tools I Use to Be Productive—And Maintain My Sanity

I’ve gotten more than a few comments lately on how much I am on top of things. However, I haven’t always been this way. For years I constantly let things slip through the cracks and didn’t pursue my goals as rigorously as I could or should have. I was tired of it, so I focused my efforts on figuring out the systems and tools that worked for me. Continue reading “Tools I Use to Be Productive—And Maintain My Sanity”

Developmentally Appropriate Evaluations

In evaluation, one thing is clear: context matters. Many evaluators have described how the context of the program (e.g., age of program, type of program, feasibility) and the context of the evaluation (e.g., resources, stakeholder involvement, measurement tools) affect evaluation designs, methods, practices, and measures. However, evaluators have only begun to examine how the developmental context also affects how evaluators design and conduct evaluations. Specifically, how should the age of participants affect evaluations? Continue reading “Developmentally Appropriate Evaluations”

Importance of Measuring Participants’ Reasons for Being in the Program

This blog post was originally posted on AEA365 and was written with Tiffany Berry, a research associate professor at Claremont Graduate University. 

Today we are going to discuss why you should measure participants’ motivation for joining or continuing to attend a program.

Sometimes, randomization in our impact evaluations is not possible. When this happens, issues of self-selection bias can complicate interpretations of results. To help identify and reduce these biases, we have begun to measure why youth initially join programs and why they continue participating. The reason participants join a program is a simple yet powerful indicator that can partially account for self-selection biases while also explaining differences in student outcomes. Continue reading “Importance of Measuring Participants’ Reasons for Being in the Program”