>> hi, thanks for joining "learning to love your logic model." the origins of this class are my own exasperation at how much people are being dismissive of and attacking logic models. and when you get at what they're attacking, almost always it's a caricature of logic models, not logic models as i've come to know them. and as a result, they tend to do nothing, or they do something else that ends up being a logic model under another name. the purpose of today's webinar is to teach you to love your logic model.
what can you get from a standard logic model? and i think by the end of the webinar, you'll learn that most of what you want to do in planning an evaluation can be done using this very simple tool that we're all used to using. key takeaways today -- it's never about the model, it's about understanding your program. when people get dismissive of logic models, sometimes because of their complexity, they often do nothing. i don't care whether you use a logic model or many of the other graphic techniques. the important thing is you need to understand your program if you're going to be successful at planning an evaluation.
a second takeaway that we'll go into in a second is that many of the benefits of this are what we call "process use benefits." even though logic models originated in evaluation, sometimes the biggest benefit we bring to the table as evaluators using logic models is helping people clarify up front logical gaps in their program. some of those are immediately actionable and never turn into an evaluation question. some of those set up evaluation questions. numbers three, four, and five are related.
you know lots about your program even before you draw your model. one of the advantages of logic models is that we often know too much about our program. we have business plans, strategic plans, communication plans. and some of them talk past each other. the simple discipline of a logic model is a way to figure out how do you depict your program, and do you depict your program consistently? what that means is number four, "a little bit of logic modeling goes a long way." like many things, including some spices, a little bit of this really, really dresses up the dish, and only a little bit more suddenly overwhelms the dish with that spice. well, same thing with logic modeling. as a simple technique, it yields a lot of the benefits using the very, very simplest approaches. and that's how we're going to approach it today. and number five, why is this important? because there's a trade-off between accuracy and utility. what a logic model looks like really depends upon who's using it and for what purpose. when we assume automatically that the purpose of logic models is complete, accurate,
nook-and-cranny detection and description of our program, then often what we end up with is logic models that are so complex that they don't serve other purposes, like communication or engaging stakeholders. so those are the key takeaways we're going to touch on today, and at the end of the class, we'll look back and see if we've hit them all. the reason we care about logic models is that all of us are in a continuous quality improvement or continuous program improvement cycle. most large organizations, and cdc is certainly one of them, do all three of these processes -- planning, performance measurement, and evaluation.
it's not important that the same person or the same part of the organization do all three of these. they're all very complicated. but for them to feed each other, the organizations need to have some common frame of reference. and this is really where logic models excel. so even if someone down the hall is doing planning, someone a floor up is doing performance measurement, and i'm doing the evaluation, if the organization has a very simple logic model at the start, then we can take it
in our individual directions but we'll take it with the same storyline. which means then the evaluation questions i'm asking are the ones that come out of the strategic plan. the performance measures and evaluation yield findings that can then feed back to close that cycle. what do we do and how do we do it? this is really the underlying intent of the cdc evaluation framework as well. so the framework is about 15 years old. six steps, starting with engaging stakeholders, but circulating all the way around that circle
to ensuring use and sharing lessons learned. logic models are so important in our framework because they are a way to get the forward momentum to get all the way around that circle. our circle says good monitoring and evaluation isn't just monitoring and evaluation that collects data correctly and analyzes it correctly. good monitoring and evaluation (m&e) is evaluation and monitoring where the findings are used. how does that happen? that happens in the focus step, by making sure you answer the questions
that are really most important for that situation. both of these are really, really obvious observations. how do we get there? that's where logic models become so important. those key steps, engaging stakeholders and then describing the program in sufficient detail to know that everyone has clarity and consensus. you can see how having that clarity and consensus up front is the way to make sure that you have a good discussion about what we should focus on in this evaluation. and you can see how choosing those right questions creates the forward momentum
for your findings to turn into findings that are used. let's start off with the world's simplest definition of logic models. logic models are graphic depictions of the relationship between your program's activities and your program's intended effects or outcomes or impacts, or whatever results term you were brought up to use. i bolded those two words because they're really essential to the definition, and it's what makes logic models different from other things. there are many graphic depictions of all the activities in my program. what makes logic models different is that we don't depict just the activities,
but we depict the relationship between the activities and the outcomes. so things like process maps and decision trees, etc., are really, really helpful for complex programs, but they get only at the "what" of the program. something becomes a logic model when it tries to depict the relationship between the "what" and the "so what." and that turns us to the word "intended." when we ask most programs to start thinking about their "so what," and particularly to take that "so what" downstream, it makes them very, very uneasy. this definition of logic models reminds us
that what we're depicting is aspiration or intention. it's not the reality, and it's not the fantasy. now, in a class on loving your logic model, it's strange to see this slide -- that "you don't ever need a logic model, you always need a program description." but it's very, very important. there are many people who are turned off by logic models. they've been brutalized by very dogmatic approaches or they find them overly complicated. as a result, they do nothing. and what they do in the process is they miss all sorts of opportunities to learn
about their program in a way that will help them with planning an evaluation. so i try to emphasize here that you don't need a logic model. you don't need any specific type of graphic. you always need a program description though, to do good planning and evaluation. what does that mean? well, i think as you go down these five bullets that you need to know about your program, you'll conclude by the end that, "boy, i really should have a picture. that's going to be the easiest way to do it." but if you're the type of person that doesn't like pictures, tends not to look at maps,
then just remember -- somehow you need to understand all five of these bullets. the first is the big need your program is trying to address. so what's that big lighthouse in the distance? and many programs don't want to think about that because they think the act of putting that in their picture or their description or their logic model means being held accountable for it. we'll see later that's not the truth, and that in fact there's a million reasons you want to know what that lighthouse in the distance is. that big need you're trying to address,
even though your program alone isn't going to get there. the second thing is target groups. we need to understand both the target groups, the people who need to take action, and outcomes. outcomes are the type of actions they need to take. if i was doing this for the corporate audience, it would be a very, very short presentation because almost always in a corporate environment, i'm going to do activity x, y, and z, and my outcomes are very, very proximal. they happen almost immediately.
sales go up or they don't go up. revenues go up or they don't go up. profits go up or they don't go up. we don't have that luxury in health and human services. rather, we do something which has to move a whole bunch of people who are not us to do something, which moves a whole bunch of other people who are not them to do something, and that's how we make progress in public health. so those first three bullets are essential to program description of any kind, no matter how you do it, because it helps us understand what's that causal landscape,
what's that chain of outcomes required to get to that big need we're trying to contribute to solving. numbers four and five finally get to your program. so number four says, now that i understand what the "so what" is and who needs to take those "so what" actions so public health can happen or public health outcomes can happen, what am i bringing to the table as a program? and then finally, that fifth bullet reminds us that sometimes the biggest process uses come out of not just listing the activities and the outcomes or sequencing the activities and the outcomes, but trying to depict these causal relationships using boxes and arrows,
drawing in something to show the pathways between what i do as a program and which outcomes are going to happen first, what are they going to lever, and how's it going to get me to my public health need? and i put "cause" in quotes to remind us that at this point, we're aspirational. we may or may not have a theory. we may have a very strong evidence base. nevertheless, at any point in time, i can say, "my program description is -- if i do these activities, i think activity a will lead to outcome b, which should drive outcome d, and activity c should drive outcome e, which should drive outcome g." right? and that's where a lot of these lessons and process uses are going to come from, as we'll see. now, logic models can look any way you want them to. and the format that's going to work most appropriately depends totally on the need to which you're putting the logic model. so here's a cartoon of the world's worst logic model. i think it would be very, very rare to conclude that this is the logic model you need. in fact, it's logic models like this that have led to this webinar, because it's logic models like this that people who hate logic models tend to caricature.
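as an aside, that kind of underlying-logic statement -- "activity a leads to outcome b, which should drive a later outcome" -- is really just boxes and arrows, and it can even be written down as data. here's a minimal, purely illustrative sketch in python; the letters are hypothetical stand-ins, as in the example above, not any real program's terms:

```python
# illustrative sketch (hypothetical letters): the stated program logic is a
# tiny directed graph, where each arrow says "this should drive that."

arrows = {
    "activity a": "outcome b",
    "outcome b": "outcome d",
    "activity c": "outcome e",
    "outcome e": "outcome g",
}

def chain(start):
    """follow the arrows from a starting activity to its most distal outcome."""
    path = [start]
    while path[-1] in arrows:
        path.append(arrows[path[-1]])
    return path

print(" -> ".join(chain("activity a")))  # activity a -> outcome b -> outcome d
print(" -> ".join(chain("activity c")))  # activity c -> outcome e -> outcome g
```

writing the logic down this way makes the sequencing explicit: everyone can see, and argue about, which outcome is expected to lever which.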
on the other hand, this is a very, very simple logic model. it includes all of the elements we tend to see in logic models: inputs, activities, outputs, a stream of short-term, intermediate, and long-term outcomes. then underlying the whole thing, context, assumptions, stage of development. and sometimes this simple, simple logic model is just plenty. we'll see later that even here, you do yourself some good by building from the inside out. here's another simple logic model that we'll look at later. and you'll notice here, i haven't used any of those terms -- inputs, outputs, etc. i have depicted only activities and outcomes
and divided them very simply into early and later activities, early and later outcomes. and we'll see later that for a lot of the uses we put logic models to, this is going to be just fine. logic models need not even be linear. there was a period of time when it was very popular to have your logic models in circles, and that still survives in some places. i'm a linear guy. i kind of prefer to see them as boxes and arrows, but if it works better for you and your stakeholders to depict it another way, that's great.
you can see in this case, this is a logic model for policy development. we start with problem identification. we do our policy analysis. we do strategy and policy development. that leads to a whole bunch of outcomes -- policy enactment, policy implementation -- which gets us to our public health outcomes. again, we did this as a circle, but it works perfectly well to remind us of the steps our program has to go through. the second takeaway is that sometimes the biggest benefits are process use benefits.
logic models, as i said, grew up in the evaluation field as a tool for evaluators, and often as a tool for identifying things to measure. what we learned early on is that the big benefit we often bring to this process is clarification with the program early on of the logical gaps in what the program is trying to accomplish, or identifying lack of consensus on those things. so "process use," as a term often attributed to michael patton, means when the influence on program improvement comes not from the findings of an evaluation, good as they're going to be and helpful as they're going to be down the road, but from these immediate insights you glean during the tasks involved
in doing an evaluation. so the identification of stakeholders, the setting of the evaluation focus -- those are all early tasks that will often lead to clarity and consensus, or identified lack of clarity and consensus, and most importantly of all, the development of the logic model. so let's look at how logic models help with process use. as i said before, sometimes we know too much about our programs. we have a strategic plan. we may have an evaluation plan. we have a set of performance measures.
what logic models do is they hover above those many, many different processes and use a standard set of terms and definitions to dive into your raw material, no matter where it comes from. so i may have a strategic plan full of actions, objectives, and goals, right? well, by using these terms -- outcomes and activities, what and so what -- they can help me unravel what's going on in my program. so here's a really good example. this is a real program at cdc. it had five goals, and goal 3 i've called out for the purpose of this exercise.
disseminate information to guide policy, practice, and other actions to improve the nation's health. now, let's say that you're a grantee and i tell you the good news about a year in. "you know what? by the end of the grant period, if you accomplish goal 3, you're going to get automatically renewed. if you don't accomplish goal 3, you're automatically going to get canceled." well, as a grantee, you're going to be very excited. "i only have to accomplish one goal instead of five goals."
but really, i haven't answered the question of what it means to accomplish goal 3. logic models to the rescue, and we see that goal 3, when we apply the typical terminology of logic models -- activities and outcomes, "what" and "so what" -- is really not one thing. it's a play in three acts. disseminating strong and relevant information leads to changes in policy, practice, and other actions, which leads to improved health outcomes. now, why is this useful? because when i tell you, as your funder,
all you have to do is accomplish goal 3, i haven't told you anything. i've set up a discussion where you say, "when i have to accomplish goal 3, do you mean i need to only show that i disseminated strong and relevant information? do i need to show in addition that that relevant information led to changes in policy, practice, and other actions? or do i indeed need to show that those changes in policy, practice, and other actions led to some sort of improved health outcome?" you can see exactly how different the demand on you as a grantee would be. and here, a logic model in very simple fashion, by taking terms like goals and objectives
and unraveling them, becomes so helpful. one of the things that turns people off about logic models is all these green and white boxes. logic models are full of terms because logic models come out of evaluation, which is a hybrid discipline. and so, we see a lot of terms in play. sometimes those terms are used interchangeably. sometimes they're used in different ways. some of the terms sound alike -- outputs versus outcomes, inputs versus outputs, mediators versus moderators, as we'll see later.
the reality is that those eight green and white boxes really boil down to three fundamental insights about your program. what the program does -- the "what." who or what will change because of the program -- the outcomes, the "so what." and then those last two boxes of inputs and context really provide the same insight, although we look at it in slightly different ways, as we'll see. what is it that the program needs beyond what it does as a program? what are the assumptions i am making beyond what i do as a program that's going
to affect my ability to implement my activities in a way that will get my outcomes? now, as i said before, simple logic models are best, and i always start with a simple model before going into all of these terms. so the good news is, even though i boil this down to three or four boxes, the two boxes i want to start with, the two boxes that matter most, are the set of activities -- what is it the program and its staff actually do? -- what we sometimes call the sphere of control. and the outcomes -- what are the results of activities?
who or what will change beyond the program if the program does a good job at its activities, or what we call the "so what," or sometimes the sphere of influence. so here's the underlying logic of the communities putting prevention to work program. this was a large program funded by stimulus funds, where we gave money to many, many communities in the united states with the intent that they would move the dial on activities and actions that would make a difference in obesity, smoking rates, morbidity and mortality in the long run. so the money was for two years, but the logic model looks aspirationally at everything all the way down to that distal need.
what do we learn from that very simple logic model? well, we learned to have a conversation about sphere of control versus sphere of influence. where do my activities stop -- the things i have control over -- and where am i suddenly in outcome land, where i'm trying to influence someone who is not us to do something? the second thing that very simple logic model tells us is the sequence of outcomes. what's the order in which i expect these outcomes to occur, and do we all agree on the sequence of outcomes, all the way out to that big public health impact we're looking to have? in the model i just showed you, there really is only one pathway depicted. but in other models where there are several pathways,
one of the other things i realize quickly is that there is a mismatch of activities and outcomes. i may find that some outcomes don't have any activities to drive them. some activities tend to go nowhere. they're not related to any big outcomes. the last two are the most important uses of logic models that we're going to talk about today. that fourth bullet is what we call the accountable outcome. as i said before, there's a lot of reasons i want you to understand your program,
all the way down to that distal lighthouse. but the further you get toward the lighthouse in your logic model, the more uneasy people get, because they fear the act of drawing it is the act of being held accountable for it. no -- the accountable outcome discussion is a different discussion that we'll talk a little bit more about later. but the logic model sets up that discussion. how far in this chain of outcomes am i expected to get in the current project period? how far am i ever expected to get for this program to be considered worth the investment? so let's look at that, and then we'll look at that fifth one, how logic models set up a frame
of reference for the rest of your program. so i mentioned cppw before, and i mentioned that it was two-year money. i mentioned that the logic model underlying it was this model, which started off with activities and supports and progressed all the way out to reductions in obesity, smoking rates, morbidity, and mortality. the difficulty with having that logic model out there is it keeps our eyes on the prize, but every once in a while you have a stakeholder who thinks that the current effort is going to get us all the way over to the right. well, this happened on occasion, as it does with all programs.
and of course, we could call people back to the logic model and say, "remember, this is the long-term journey of cppw. our question at the moment is, what can we expect to accomplish in two years?" well, in two years, we decided that if things went well, we should be able to see that the grantee communities were able to make changes in the systems and environments, and that policies changed, or that a supportive policy environment was created as well. now, you can see that's solidly on the outcome side; it's just not the outcome called reductions in obesity, smoking rates, morbidity and mortality.
but we had an underlying logic and some modeling and some forecasting that showed that if we could make these changes in two years, those changes would in turn channel behavior changes in the longer run, and those behaviors in turn would channel the reductions on the right-hand side that we were looking for. so the logic model reminds us, and reminds stakeholders who are skeptical, yes, we are in the reductions in obesity, smoking rates, morbidity and mortality business. at year two, we're not failing. we're making progress in the right direction if, in fact, we get to a world where systems, environments, and policies are more supportive.
the next insight we get from a simple logic model, besides these obvious ones -- sphere of control, sequencing of outcomes, mismatch of activities, and now the accountable outcome -- is that the logic model sets up a frame of reference for more detailed models. when i work with an organization, i always say, "look, if you can only do one model, start off with a big, bold, overall model for the entire effort." even if you can have multiple models, start at the top and work your way down instead of inducting from the bottom up. what starting at the top does is it allows us to lay out in big bold strokes what the purpose of the program is.
i can then dispatch everyone who's in any part of the program and say, "now, you do your own logic model." but each logic model is built with reference to the levels above and levels below. and how would i know that? because i would see -- as i progressed through these models -- that the right-hand side of each model is something that i saw in the big model. and on the left-hand side of every model, i'd see one or two clusters that i saw in the big model. so having laid out all of these opportunities for logic models
in all these different formats, how do we get there? so let's talk a little bit about activities and outcomes and how to create a simple logic model from them. there are three big ways to construct logic models, and which one's going to work for you depends totally on the purpose for which you're drawing the logic model. i would say, in my work at cdc, which spans now a couple decades, almost all of what i do is method 1. almost always, i know something about the program. the program may exist.
it may just be in the planning stages, but i know enough about the program, even if it comes from a mission or a vision or a business plan or a communication plan, such that i can look at this material and say, what sounds like what the program is going to do -- the "what"? what sounds like who or what is supposed to change because of the program -- the "so what"? there are other occasions though where number two and number three are a little bit more helpful. so number two says, "i'm really in a formative phase. i know the destination i'm trying to get to.
i'm really not sure how to get there." well, then the purpose of the logic model is to start on the right with the destination in mind, and you keep backing up and saying, "well, how do i get there? how do i get there? how do i get there?" this sounds like it's going to be a really, really simple process, but in fact, it becomes very, very complex, very, very messy. but if i then impose those two lenses on that mess -- what part of what i just laid out here is something i'm going to do as a program?
what part of this is the way in which people who are not me are going to change so that public health can happen? -- then i'm back to the same kind of logic modeling raw material that i have in method 1. method 3 works best when you've got small programs that don't see themselves as part of the larger picture but need to. so i work a lot with community-based organizations, and sometimes i'll do pro bono consulting. a small community organization will always have a very, very good sense of its "what," often in excruciating detail, but they'll have very little sense of "so what."
they'll have one or two very stubby outcomes. well, for reasons of strategic planning, performance measurement, etc., you really want a better sense of that "so what." and so there, the action is to move them out from their outcomes and say, "so what? then what happens? so what? then what happens?" until we get closer to a thing that looks like the big distal lighthouse in the distance. what i do then is i have my two columns, and sometimes the easiest way
to create a logic model is just to say, "without adding anything, what if i took that column of activities and divided it into two? are there ways to sequence these activities based on their logical occurrence? do some of these activities have to happen before other activities can happen?" sometimes yes, sometimes no. even very, very complicated programs often will have activities that roll out over time but are not logically connected. they are logically independent. on the outcome side, one really, really big benefit of logic models, as we said before,
is the sequencing of these outcomes. i may start off with one column of outcomes, 10 outcomes in there. it is invariably the case that if i give myself two columns or three columns to play with, i can answer the question -- and figure out if everyone answers the question the same way -- which of these outcomes is going to happen first, thus levering the later outcomes, thus levering those last outcomes which look very much like our need. so i'm going to walk through a very, very simple case that we're going to use for illustration purposes. and for people who've taken classes with me at cdc, this is a case i've used over the years.
i've doctored it up to make teaching points. at one point way back in history, it represented how we did lead poisoning prevention. these days it does not, since we have more of a primary prevention focus. but i retain this case because i've been able to embed some minefields in it that help the teaching of these logic model points. so let's kind of read through this together. "county x, with a high number of lead-poisoned children, has received money from cdc to support its childhood lead poisoning prevention program." all right, that's obviously just preamble.
for the rest of this, i want you to pay attention, as before, to what sounds like that big lighthouse-in-the-distance need, what sounds like what the program is going to do -- its activities or its "what" -- and what sounds like the outcomes it's trying to achieve or to influence -- the "so what," who or what is going to change if the program does a good job at its activities, right? "the program aims to do outreach and identify children to screen, screen and identify those with elevated blood lead levels, assess their environments for sources of lead, and case manage both their medical treatment and correction of their environment.
they will also train families of elevated-blood-lead children in selected housekeeping and nutritional practices. while as a grantee they can assure medical treatment and reduction of lead in the home environment, the grant cannot directly pay for medical care or for renovation of those homes." so a simple case. i've laid it out in one slide. obviously, it's more complex than that, but this slide is complex enough for me to extract from it: what does my two-column table look like?
what does this program do? who or what is this program trying to change? if i had you in class, you would do this yourself, but i bet we'd land in about the same place. what does the program do? it does outreach. it uses that outreach to figure out where to screen. it does some case management. notice i haven't listed these in the order in which they occur.
i've listed these just in a free-floating order, which is fine right now. they do referral for medical treatment. they identify the kids with elevated lead. they do environmental assessment of the home. they refer to housing for environmental cleanup if the home needs it. and they do family training. on the outcome side, i was trying to depict boldly who or what we are trying to move that's not us. and there are three who's or what's we're trying to move.
we need to change families. we need them to adopt the in-home techniques our training teaches them. and we don't own the families. the families are independent actors. providers are definitely independent actors. they are not part of our program. yet, we need them to treat the ebll kids that we find, and/or to refer the kids that they find. the landlord or the housing authority -- they're really so important to us because that's how we eliminate the lead source, because we can only refer for medical treatment.
we can only refer for environmental cleanup. and then there are the italicized ones at the bottom. those look pretty much like my lighthouse in the distance. get that ebll down, which is what the literature says is going to stop the developmental slide, presumably improving the quality of life. now, i learned a few things from even this two-column table, but i don't want to go into those here. i want to show instead how much we learn when we simply expand this to four columns. so here's that same two-column table expanded into four columns.
and all i've done is i've asked myself, in that list of activities, which ones fundamentally need to happen first to drive the later activities? on the outcome side, you can see that the right-hand side looks pretty much like the downstream need, the distal lighthouse in the distance. so that early outcomes column is, what are the things i expect to see earlier in the project period that are going to drive me down to that right-hand column? now, as i said before, sometimes there are a lot of process use insights that come even from those two columns and from those four columns. but for the real process use insights, and when we do strategic planning using logic models,
i almost always drive to this next approach: take that four-column table and start drawing in some boxes and arrows to lay out what some people would call the theory of change. some people would just call it a logic model. some people would call it a flowchart-format logic model. but the important thing is, no matter what you call it, to remember that the arrows you're putting in there, which i'm calling "causal arrows," are not causal in the sense of scientific theory; they're causal in the sense of "what's the underlying logic of the program?"
based on what we know today, what do we think that underlying logic is? the second thing to remember is this is not a different logic model. it's just the same elements in a different format. so if the four-column table is the mapquest directions and narrative, then this is the mapquest map. the arrows are therefore going to go from activities to other activities. some are going to go from activities to outcomes. and some are going to go from those early effects or outcomes to later outcomes. so again, remember we're going to see all kinds of arrows in here.
they're going to play different roles. so here's our four-column table, reformatted as what i call a causal roadmap, but that's just a term that i made up. so you can see here that i haven't added anything. i might have changed a few of the terms so that they fit into the box. you can see the column one looks very much like column one in the four-column table. we do outreach which leads to screening, which ids the kids. we then put the kids with elevated blood lead into case management. that leads to three pathways that proceed to the east that get me
over to reducing elevated blood leads, which then leads to what the literature shows. not necessarily improved development, but at least stopping the developmental slide and more productive and/or quality lives. what do i gain from having done this? well, i gain a million different things, but let me point out two or three key process use insights. so let's say that i told you i was going to make a bet with you, and that bet is if you get to reducing those eblls, so that second box in on the main pathway -- if you get there at the end of three years, i'm going to double your grant.
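a causal roadmap like this can be sketched as a tiny directed graph. this is purely illustrative -- the node names below are my own shorthand for the boxes on the slide, not official program terms -- but it shows how the three pathways out of case management converge on reducing eblls:

```python
# a sketch of the causal roadmap as a directed graph; node names are
# my own shorthand for the boxes on the slide, not program terms.
roadmap = {
    "outreach": ["screening"],
    "screening": ["identify_kids"],
    "identify_kids": ["case_management"],
    "case_management": ["env_assessment", "train_families", "refer_medical"],
    "env_assessment": ["env_cleanup"],
    "train_families": ["families_perform_techniques"],
    "refer_medical": ["medical_management"],
    "env_cleanup": ["reduce_ebll"],
    "families_perform_techniques": ["reduce_ebll"],
    "medical_management": ["reduce_ebll"],
    "reduce_ebll": ["stop_developmental_slide"],
    "stop_developmental_slide": ["quality_of_life"],
}

def paths(graph, start, goal, trail=()):
    """enumerate every pathway from start to goal."""
    trail = trail + (start,)
    if start == goal:
        return [trail]
    found = []
    for nxt in graph.get(start, []):
        found.extend(paths(graph, nxt, goal, trail))
    return found

# the three pathways that all have to 'head east' from case management:
for p in paths(roadmap, "case_management", "reduce_ebll"):
    print(" -> ".join(p))
```

nothing here is analysis -- it's just the same boxes and arrows in a form you can query, which makes the "how many pathways reach my accountable outcome?" question mechanical.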
if you don't get there, i'm going to dock your grant. if i frame that bet and then i tell you, "looking at this logic model, if that logic model is a correct depiction of the program, why would you not take on the bet?" i bet you, though, your eye will be immediately drawn to two or three things. and these two or three things are not observations you might necessarily get from that four-column table. so why should i not take on this bet? and again, you may choose to take on the bet. but i'm going to guess that if you are a risk-averse person and you know this program
and you think that this is an accurate depiction of the program, a couple things are going to scare you from hitching your wagon to that reducing ebll star. the first one is, look how many of the outcomes or how few of the outcomes are from things you have direct control over. you're dependent upon the kindness of strangers for all three of those pathways. i can do an environmental assessment. that doesn't guarantee someone's going to clean up the environment. i can train families. that doesn't guarantee they're going to perform the techniques.
i can refer for medical treatment. it doesn't guarantee that the person i refer to actually conducts medical management. so the first thing i worry about is that there's quite a bit of distance from my accountable outcome, and that's easy in this case. it's where i'm going to cash in, get those blood leads down and keep them down. there's quite a bit of distance between where my control ends and that accountable outcome. a second insight i get from this model that i might not get from the four columns is, look to the left of that box. it says reducing eblls.
you'll see an arrow there, and that arrow shows three pathways collapsing into one pathway. now, i've been very, very vague about what that one arrow means at this point. but it could mean that every one of these pathways is an independent way of getting the blood lead down. if that were the case, i might take on the bet. i mean, if i have three ways up the mountain, they're not all going to be washed out at the same time. i might be able to get there. or it could mean that those three pathways all have to occur to get
that blood lead down and keep it down. and if you know anything about lead poisoning, it's much closer to the latter. so a second reason i don't want to take on this bet -- and this is something i might not see in my four columns, is -- holy mackerel, for me to get the blood leads down and keep them down, i have to depend upon the kindness of three sets of strangers. and those three sets of strangers need to head east and they all need to head east at about the same pace so that the blood lead can be reduced, and i can keep that blood lead reduction down.
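that distinction between the two readings of the collapsing arrow can be put in toy numbers. assuming -- purely for illustration, this is not program data -- that each pathway succeeds with the same made-up probability, the odds of reaching the outcome look completely different depending on whether any one pathway suffices or all three are required:

```python
# toy arithmetic, not program data: p is a made-up per-pathway
# success probability used only to contrast the two readings.
p = 0.6

# reading 1: the three pathways are independent alternatives (any
# one of them can get the blood lead down on its own).
any_one_suffices = 1 - (1 - p) ** 3

# reading 2: all three pathways must succeed together, which is
# much closer to the reality of lead poisoning.
all_three_required = p ** 3

print(f"any one suffices:   {any_one_suffices:.3f}")   # 0.936
print(f"all three required: {all_three_required:.3f}")  # 0.216
```

under the "all three required" reading, the bet looks far worse -- which is exactly why the join to the left of that box matters so much.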
the final thing that i might learn from this that i wouldn't learn from the four-column table is, if i were to ask you "what looks like the hardest job in this organization?" well, if you're not used to doing lead poisoning or this type of program, you may think all of these things are hard, and certainly they all come with their challenges. but look over in column one. look at poor case management. the first three activities preceding case management just pancake down. nothing heads us east towards outcomes until we get to case management.
and then that poor case manager is responsible for these three very different pathways. now, what does this tell us? it tells us a few things that are helpful immediately for process use, and a couple of things that are very helpful for us as evaluators creating measures. in the process use sense, doing this instead of the four-column table immediately tells us, "look, i'm worried about this program because it's just getting implemented." as the implementer or the designer or the planner, i'm going to worry about things. i'm going to wake up in the middle of the night thinking about this program. but i know really where to direct my anxiety, thanks to this logic model
and thanks to this bet discussion. i know that of all the things i'm going to worry about, the things that are most going to put me in a ditch are the failure to have those handoffs between column two and column three, the failure of those handoffs to lead to concurrent and active and effective performance by my three classes of partners -- people cleaning up the lead source, the families performing the techniques, and the doctors doing medical management. and thirdly, it's going to depend upon how good i am at finding these superhuman case managers. so those are immediate process use insights. they say, if i want to make sure this program gets out of the barn and looks strong and ready
to survive, i have to address those three things and address them now. as evaluators, we're looking at where the program planner feels queasy as we go through this exercise, and we're noting to ourselves immediately, we need to be paying attention to measurement of that above all else. so what are the things i'm going to measure? i'm going to measure -- did, in fact, those handoffs between column two and column three happen? did, in fact, the concurrency of those three pathways happen so that the blood lead went down and stayed down?
and did, in fact, the program find the superhuman case managers who can take on and cover this span of environmental assessment, training families, referring for medical treatment? let me go through one more quick example, which will feed into some of the elaboration we're going to do a little bit later. so those of you who have been at cdc or are familiar with cdc for a long time may remember the office of workforce and career development, owcd, where the training was housed along with a whole bunch of other activities. one of my many jobs at cdc, or one of the many places
that i worked doing evaluation at cdc, was owcd. and i came in just as we were developing a strategic plan. and this was the mission statement that we were starting with. "to improve health outcomes by developing a competent, sustainable and diverse public health workforce through evidence-based training, career and leadership development, and strategic workforce planning." now, let's read that again, but again, looking through the lens of a logic model. what in here sounds like the need? what in here sounds like what the program itself is going to do?
and what in here sounds like the "so what"? who or what, that's not the program, is going to change? now, if i had you in class, we would sit down and do this as an exercise. and i bet you, this is where we'd land. what's the need? improving health outcomes. what are the activities? conducting training, doing career and leadership development, doing strategic workforce planning. what does that lead to?
what's the big "so what"? a whole bunch of people called the public health workforce become more competent, more sustainable, and more diverse. they may become better in a million other ways as well, but the mission statement says, gosh, the big fish we're trying to fry is to improve their competency, their sustainability, and their diversity. now, one little wrinkle here, this word "evidence base." so if you read that mission statement, you assumed that -- it's only 30 words -- every word needs to mean something.
then when i came to the word "evidence base," i really had two choices. most of you probably assumed that the evidence base was embedded in the training, the leadership development, or the strategic workforce planning. meaning that evidence base exists, and i'm going to make sure i draw on that evidence base when i try to conduct my activities. and that's why i'm going to get to a competent, sustainable, and diverse workforce. conversely, the question would be, "but gosh, what if that evidence base doesn't exist?" at this point, what i want to ask myself is, "does it matter?" so i'm in the strategic planning business.
i have this mission statement. i just went through this simple logic modeling exercise. i come up with this question about evidence base. does it matter or not? well, the question is, do i need to know as an organization if that evidence base exists ahead of time or if that evidence base is something i have to create? and the answer is yes. i absolutely, positively have to know that. that's a big process use insight.
if that evidence base is something i have to create, that evidence base moves from a red box under inputs over to another green outcome box called "really strong evidence base," which emanates from a whole bunch of activities that are not currently in the logic model, which requires staff and requires money. so this simple three-box logic model helps to identify up front a really, really important question i need to resolve for my program. so let me conclude this section by saying -- simple is best. always start with the simple logic model. sometimes activities and outcomes and the sequencing
of the activities and outcomes is just plenty. nevertheless, we deal with all these other terms, and they do exist. and so i want to go through, why do evaluators and evaluation logic models have so many terms, and when should you use those terms, and how can you use those terms to maximum advantage? so the terms we're going to look at are the ones that i deal with most commonly in my own work. mediators and moderators, outputs and inputs. but always remember, as we go through this explanation, form follows function. not all models need to have any or all of these terms. and even when you use these terms, there's ways to use them that will benefit you,
and ways to use them that just means you're going to create logic model fatigue for the people you're working with. so let's start off with mediators. mediator is a very confusing term because it sounds a lot like intermediate outcomes. mediator just means stuff that comes between other stuff. so here's a famous cartoon you've probably all seen that depicts the concept of mediators or mediation. famous scientists, step one. step two, a miracle occurs.
step three, step four, result, etc. and the caption is, "i think you should be more explicit here in step two." well, that's really what mediators are about. they're not about the things that are miraculous, meaning we need a divine intervention to accomplish our public health outcomes. it means the place where we have a gap in logic, we're not really clear on how we get from step one to step two to step three to step four. and this is where mediators help us. and sometimes we need them, and sometimes we don't.
sometimes our logic models are very explicit and they don't really require any mediator elaboration. other times, they're not. let's go back to the example we just left, with owcd. so here's the very, very simple implicit logic model that we came up with from their 30-word mission statement. do they need a mediator? they don't need a mediator necessarily if one of two conditions is present. condition one says, "i don't need a mediator if it turns out i can show,
and show pretty significantly, that my activities lead to that distal outcome." so if i can show, as owcd, that the training, leadership development and workforce planning i do leads to improved health outcomes, well, i don't need a whole bunch of mediators. who cares how i get there? god bless me for doing that good work. i'm going to guess that, in this case, everybody in owcd was very, very restive and uneasy about putting improved health outcomes in there, because, gosh darn it, improved health outcomes come from a million different things.
we're just one piece of the puzzle. our ship is so small and that sea is so large. not necessarily. they still don't need a mediator if it turns out the thing they can achieve, that accountable outcome that they feel most strongly and confident about, is one that people would buy as a good in its own right. so owcd doesn't need a mediator if, when they tell people that what they produce is a competent, sustainable, and diverse workforce, everyone claps them on the back and says, "man, thank you for a job well done.
a grateful nation thanks you for your service." now, a decade ago, i would say they didn't need a mediator. people had much more faith in government. they had much more faith in the efficacy of government expenditures. they had much more faith in the efficiency of government efforts. such that in owcd's elevator speech, if i said, "i do all this good stuff and as a result the public health workforce is more competent, sustainable, and diverse," end of story. i'm doing good work.
i think in the ensuing decade, that's all changed, and i don't think that's going to buy you much anymore. so owcd needs a mediator. why? they cannot prove that their activities lead directly to an improved health outcome, and the outcome that they can show with near certainty -- a competent, sustainable, and diverse workforce -- doesn't sell the program to the people that matter -- the people who care about it, like congress, the public, etc. so how do we get to that mediator? remember, mediators mean things that come between other things. in this case, that mediator space is obviously going to be some outcome yet to be named
between that proximal outcome, the workforce is more competent, sustainable, and diverse, and that distal outcome, improved health outcomes. so what i'm looking for in my mediator is, okay, what's so gosh darn good about competency? what's the "so what" of a competent workforce? what's going to come from a competent workforce that, while it's not improving health outcomes itself, is clearly going to be a much more important driver of health outcomes? what's so good about having a sustainable workforce, one that isn't constantly marred with attrition and/or has a deep bench?
why is that so good? not because it improves health outcomes, but because it drives something that gets us much closer to improving health outcomes. same thing with diversity. why is a diverse workforce so much better than a homogenous workforce? what's the "so what" of that? now, there's good literature on a lot of these things, and at this point you would obviously draw on that literature. but the reality is, sometimes you're operating just from practiced wisdom, and even there,
this is still going to be a helpful exercise for you, framing, what is it that's going to come from that outcome i've listed? what's that next set of outcomes that i want to measure, i want to sell? because that's what's going to persuade skeptics -- this is worth investing in. so if i had this in class, we'd spend some time doing this. but i do bet you, this is where we'd land. if my workforce is more competent, programs are going to be more effective. people are going to know what programs work and they're going to deliver them in the right way. that's not improving health outcomes, but if i tell someone in an elevator speech
that i produced a competent workforce, they may yawn. if i say, "because my work produces a competent workforce, you can guarantee that those programs delivered at the front line are the most effective programs, programs we know will be effective." that's going to buy me more. what's so good about sustainability? who cares? it could be that people hate sustainability. it just means that you're paying all of these public health workers forever
and ever to sit around doing nothing. well again, if the core of sustainability is continuity in relationships and approach, i know over the years which people in town are most important to reach this population. i know over the years which people in town are most effective for funding. i don't waste a whole lot of time training people and retraining people because i'm constantly dealing with churn. well, that's not the same as improving health outcomes, but a person would plausibly say, "ah, if that's what sustainability means and produces, that's something worth buying." then finally, diversity -- i mean, diversity above
and beyond anything else is probably cultural competency. and there's a huge literature showing that if i want clients to access and adhere to the program, if i want clients to trust the information i'm giving them, if i want clients to trust me enough that they'll even come in to disclose their problem, then that's going to be a good benefit of a diverse workforce. that's not the same as improving health outcomes, but obviously, a plausible person is going to say, "oh, my gosh, the clients are coming in. they're actually accessing the program and they're adhering to the recommendations. i have a good shot at getting health outcomes improved."
so this is the advantage of mediators. remember before, when we talked about cppw? cppw was a great logic model. it took us all the way out to that eye on the prize, reductions in obesity, smoking rates, morbidity, and mortality, because we were investing a ton of money in several communities that would then implement certain activities and supports. the trouble with leaving this logic model where it is, is it leads people to erroneously believe that the program is not successful unless we see those reductions. and again, cppw, as part of the stimulus, was really intended
for an investment of about two years. so here, if this is what we started with, then everything in that middle is a mediator. and that whole logic model, all those boxes we saw before, if they didn't exist ahead of time, would need to be created to help explain to other people: our eye is on the prize, but what do we expect to happen between the implementation of activities and supports and that reduction in morbidity and mortality? and you can remember from what we said before, we expect the activities and supports we've chosen to drive these communities in a way where they'll see accomplishments in their systems and environments and in the policies that underlie them.
those are changes that the literature and forecasting show will drive important protective behaviors that lead to these reductions. so in that case, the mediator saved us from the day when someone says, "how come people aren't thinner? how come people aren't smoking less?" you say, "look, because we're only in year two, and this is a model of environmental and system change to drive behavior. and you can see how those changes relate to this ultimate goal,
and that's where we are at year two." so at this point, what we've elaborated is the outcome side of the logic model. and sometimes we need to and sometimes we don't. sometimes we can get by with a very high-level list of outcomes. sometimes we need a much more complex or detailed list of outcomes. once we've got that outcome chain laid out in more specificity, then the question turns to, how do we do our activities in a way to achieve those outcomes, and especially those outcomes that we're calling accountable outcomes, the ones that we're responsible for in a certain project period?
we're responsible for it at a certain point in time. this is where the second term we want to elaborate comes in, and that term is "outputs." now, outputs is the most confusing term in the field of evaluation because it sounds almost like outcomes. and people who don't confuse it with outcomes confuse it with inputs. i tended not to use outputs in my own work, and often, even still, i don't include them in my logic models. but i've been persuaded that the output discussion, what outputs bring to the table as a discussion, is incredibly valuable for logic models and for programs,
whether you're thinking about evaluation or you're thinking about strategic planning. so let me show you how outputs help and how to use them in the right way so that you'll derive the maximum benefit from them. so here's a logic model for almost any screening program. this looks very much like our lead screening program. outreach leads to screening, leads to identifying people with the condition. i've only depicted a couple pathways here. one is i can train people with the condition in self-management. one is i can refer them for medical treatment.
the self-management will lead to a protective behavior change. the medical treatment will lead to medical management of their problem. both of which contribute to improved health outcomes. so this is fine, very straightforward. if i was looking for outputs, the traditional outputs that i usually see -- and one of the reasons i did not care for outputs -- is this. it's all the big activities -- screening, training, and referrals -- repeated in the output column with a number sign in front of them. now, there's nothing wrong with that.
if i give you a bunch of money, i want you to understand, you have to do nonzero numbers of screenings, nonzero numbers of trainings, nonzero numbers of referrals. but in terms of the logic model depicting anything that's useful for driving me as a program, either in a process use sense or in a measurement sense, that output column is pretty useless if all it's going to do is count things. so my friends that love outputs and are very persuaded of the utility of outputs for helping programs get better say, "no, no, you're looking at things incorrectly. what the logic model helps you do is to understand how you need to do an activity." and why? because the logic model makes clear, what's supposed to result from that activity?
so whereas before i was counting the number of screens i did, the number of trainings i did, the number of referrals i did, the plot thickens when i use logic models as a way to elaborate my outputs. so it's not just the number of screenings i did. it's i want to know the attributes of screening that will make that screening so good, it will lead to identifying people with the condition. the logic model is very clear. i'm in the screening business not to screen as many people as possible, but to identify people with the condition.
i'm not training in self-management as many people as possible. i'm doing that training in a way that it will lead to the protective behavior change that i want people to adopt and sustain. i'm not referring for medical treatment for the sake of issuing as many referrals as possible. i want to refer in such a way that every referral, sure as shootin', is going to lead to someone entering and completing quality medical management. you can see how much more helpful this is going to be to a program, both prospectively and retrospectively. as people planning and implementing a program, what i want to learn
from this is potential sticker shock. how was i going to screen? well, now that i know that screening isn't its own thing, it's a thing to identify people with the condition, i may realize that the screening i'm doing is going to be exactly the wrong kind of screening, either in the wrong place, with the wrong level of intensity, or whatever. the same thing obviously with training; there's a huge literature on how we actually get results from training. and it could be that the way i was going to train might have been handing out a manual.
but this is a very, very complicated behavior change. now that i realize i need to be in the training business to get behavior change, i may have to change my whole training. the same thing with referral. how i refer may change when i realize, "holy mackerel, my goal is to get a whole bunch of people with complicated lives referred in a way that they'll actually enter, stay in, and complete quality medical management." so those are all process use insights, things the program can do right away. retrospectively, as the evaluator, i'm listening to this and i'm saying, "you know,
i've just come up with my process measures." outputs, the output discussion, its major benefit for us as evaluators is to help us create good process measures. process measures down the road are going to be those measures that tell us, did the program get implemented like it should have? did it get implemented according to its gold standard? what's the gold standard for a program? it's a program implemented so well, doggone it, it's going to achieve its outcomes, or it's going to achieve the next thing the logic model says that activity is supposed to lead to.
that's exactly the discussion we had on the output side. and here we go. so if i had you in class, we'd spend a lot of time discussing this. but here's where we'd land. and notice that i've retained this idea of counting things. so it's fine to count how many kids did i screen, how many clients did i train, how many referrals did i make. but you can see in the parentheses that i've tried to call out the things that are most essential to that screening.
so in the case of lead, it's great to know i've screened a thousand kids. but what's really good to help me get to identification of kids with lead poisoning is that i do those screenings in the areas of town that meet a likely risk profile. i might give money to 10 different cities. they may all say they screened a thousand kids. i may find out down the road that, of those thousand kids, in cities 1 through 5 they found 10 kids; in cities 6 through 10, they found 500 kids. well, almost always, that's what i want to capture in the output.
how can i predetermine, by looking at how people are doing their screening, that these folks are doing it in a way that's going to lead to identification of kids with lead poisoning, and those folks over there are really wasting their time and wasting their money? on training, sure, it's great to train as many clients as possible. but using the culturally competent curriculum and appropriate supports is probably what's going to distinguish people who just report they trained 200 people from people who report that they trained 200 people and six months later those folks were maintaining the behaviors
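the city example is easy to turn into a process measure. here's a minimal sketch with made-up numbers -- the city names and counts are hypothetical -- that moves from raw screening counts to a yield rate, which is the attribute the output discussion says to capture:

```python
# made-up reports: identical screening volume, very different yield,
# because only one city targeted areas matching the risk profile.
reports = {
    "city_a": {"screened": 1000, "identified": 10},   # untargeted screening
    "city_b": {"screened": 1000, "identified": 500},  # risk-profiled screening
}

def yield_per_thousand(report):
    """kids identified with elevated blood lead per 1,000 screened."""
    return 1000 * report["identified"] / report["screened"]

for city, report in sorted(reports.items()):
    print(city, yield_per_thousand(report))
```

the raw count ("# screened") is identical for both cities; the yield rate is what separates screening done in a way that leads to identification from screening that wastes time and money.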
that they learned in the training. referrals, sure, it's great to refer as many people as possible. but will that referral lead to entry into and completion of medical treatment? only if i'm referring in advance to qualified or willing medical treatment providers. now sometimes this isn't an issue. but with something like lead poisoning, it really is. not everyone knows how to do it, and a lot of people who know how to do it don't want to take on still one more poor, uninsured, or medicaid kid. so outputs live in this land between the activities
and supports and that first set of outcomes. so i'm not asking myself now in cppw, "what activities and supports should i be undertaking that will lead to reduction in morbidity and mortality?" that may be a 5-year, 7-year, or 10-year journey, and i may have forecasting and modeling to determine when it's going to happen and how it's going to happen. rather, the question the outputs have us ask is, "now that i know what's supposed to result, especially in the short and midterm, even a little bit later, how do i do my activities and supports in a way that's going to make that happen?" and it could be i'm on the right track and it's no problem.
it could be i have massive changes to make in the way i approach things. the final thing i want to talk about is inputs and moderators. remember, i said at the beginning that there are two other elements and insights we get about our program. and those elements and insights are, what are we depending upon from the larger environment? we call those inputs and moderators, but the role they play in our discussion, as we'll see, is pretty similar. what's going on in that larger environment outside my program about which i'm making assumptions?
and if those assumptions are wrong, it's really dumb to continue with the program. now, it's very easy to fall into the trap of listing every input and every single moderator. we'll see in a second that what we're really looking for is cutting to the chase for what we call "killer assumptions," assumptions we're making that, if they're not true, it's very silly to continue with the program. the program can't achieve its outcomes, no matter how good the implementation is. so these live in two places. they live in that resource platform that we call inputs, that we talked about a little bit before. and they live in this external environment that we call moderators
or context or situation or whatever. but in both cases, we're talking about things going on in that environment over which, by definition, we don't have much control. this is the outside environment, but yet one on which we're dependent, because the presence of something is either going to accelerate or it's going to hinder the ability of our activities to turn into our outcomes. what it really does is it elaborates our program logic. so heretofore, what we've talked about is a program logic that says "if" leads to "then." if i do these activities, i'll get these outcomes.
when we talked about mediators, we fleshed out the "then" side. holy mackerel, maybe i don't want to just have two outcomes depicted. maybe that chain has three, four, or five outcomes, and that helps me understand better what i'm accountable for. on the if side, we said, "gosh, we need to do these activities well enough to achieve those accountable outcomes." well, that's what the output discussion was about. the output discussion said, "how do i do this if -- how do i do 'the what' of my program in a way that's going
to enhance the chances i'll get 'the then'?" all of that's good, and all of that's incredible process use learning. but the reality for a lot of programs is not if/then -- it's if, and, then. if i do my activities as well as i intend and the outside environment cooperates, then i can get my outcomes. and that's where inputs and moderators are so, so helpful to the discussion. so inputs live to the left of the logic model. they're the resource platform on which i mount my activities and supports. moderators live underneath the program.
at any point in time, it could be that big secular factors are going to keep my activities from turning into my outcomes, or my first outcomes from turning into my mid-term and my long-term outcomes. let's go through inputs first. again, you can list every input necessary for your program. i don't think that's going to help you as much in loving your logic model as thinking through big classes of inputs and seeing which inputs make you a little bit uneasy or a little bit queasy. so again, this is the lead poisoning program, where it's a standard screening program.
let's think about lead poisoning in this case. four classes of inputs that are very common. did i get the money i needed? did i get the staff i needed? do i have the legal authorityto do what i want to do? and then, relationships for medical treatment. in this case, i bold the relationshipsfor medical treatment because i think, while all four of these arevery, very important inputs, the relationships for medicaltreatment is the one
that constitutes almost the killer assumption. remember, in our causal roadmap forlead poisoning, that i couldn't clean up the environment and icouldn't do medical management. all i could do was refer for those things. well, that's a great answer. that's a great way to say, "how am i goingto get reduction of elevated blood leads." but it makes the assumption that i havethose relationships in the first place. if not, it's a killer assumption. the program simply can't be achievedif i don't have anyone to refer to,
or if my referrals land on barren soil. what do i do with that as an observation prospectively, and what do i do with it as an observation retrospectively? prospectively, what i understand is, holy mackerel, this program is not yet out of the box and i feel very uneasy about those relationships. well, remember early on with owcd -- did the evidence base exist? if not, it was something we had to create. well, same thing here. why would i initiate this program knowing full well i'm going to do all these referrals
and they're going to go nowhere, and that i'm not going to be able to clean up the house or clean up the kid -- the only two levers i really have for getting the blood lead down? what i would do is i would stop and ask myself, "should relationships be an input, or should the creation of relationships be a really key activity that i move over into the logic model itself?" retrospectively, as an evaluator, i'm listening carefully. what are the things that are going to drive this program into a ditch? and it's going to be very clear from this discussion that the failure
to get those relationships is going to be the thing that sinks the program, and the existence of those relationships is going to be something that really accelerates the progress. that immediately tells me this needs to become one of my evaluation measures. this needs to become one of my focal evaluation questions. just like we say that firefighters run to the fire, not away from it, logic model builders and the evaluators who use these logic models run to the inputs. they run to the killer assumption because if it's a killer assumption, i'm not going to get my outcomes.
i can go back to the stakeholders and say, "gosh, you didn't get your outcomes? i don't know why. you just had such great outputs." or i can say, "you didn't get your outcomes despite your great outputs because you failed to take into account a few of these factors in the outside environment that were killer assumptions." in this case, a missing input called strong relationships. now, moderators do exactly the same thing, but writ large.
so inputs are often things we know of in our daily environment; we don't control them, but we kind of know who does. i don't have my own money, but i know who to get it from. i don't have my own staff, but i know the folks in hr [human resources]. i don't have the relationships, but i know the people in town i need to develop relationships with. moderators are a little bit different. they're the big, bold secular factors that either get in our way or accelerate change. and they're so common and they're such a problem for most program implementation
that there is actually a name for analyzing them: a pest analysis. pest stands for political, economic, social, and technological. these are the four classes of outside moderating factors, the four elements of context that can either get in our way or accelerate our progress. and political doesn't necessarily mean democrat or republican. it could just mean something as simple as the relationships among the different agencies in town, or the relationship between the government of a city and what the private sector and the chamber of commerce think of it.
economic could mean a global meltdown, or it may mean something as basic as who has enough money to buy cable, if i'm doing a communication intervention. social is the one we're most used to: if i'm trying to reach audience x, what are their cultural norms, what are their language norms, etc.? so let's use lead poisoning as an example of what moderators might look like and what we yield from this discussion. at the bottom in this box, i've listed three very common moderators,
meaning if you look at lead poisoning programs across the country, you find that people are making progress on their outputs, but they're not all getting their outcomes. invariably, something going on in that moderator territory is probably to blame. "hazard politics" means to what degree anybody in authority cares about lead versus toxic mold versus asbestos. health insurance coverage -- lead poisoning afflicts primarily really poor kids in inner cities.
some of them will have medicaid. some of them will be uninsured. both of those factors will often make it very, very hard to find a doctor who wants to take on their case. and the third one is availability of new technology. we have to confirm lead poisoning in a fairly detailed way: a capillary blood draw, or fingerstick, and then confirming it with a venous stick of a very little kid. and a lot of moms and dads will just say, "gosh, i'm not sure i want to put my kid through that." yet, new technology that allows you to do it in one step,
just like you would test your glucose levels if you're a diabetic, ends all of that and makes it so much easier -- but it's very expensive. so here are three things not in my control: the hazard politics of my town; the health insurance coverage of my state; the ability of my funding to support this expensive new technology. the presence or absence of these may totally define how much i get blood leads down and keep them down, despite the fact i'm doing a bang-up job on my outputs for all of my activities. what do i do with this observation?
i do one of two things. in this format here, which is the traditional way i used to handle moderators or context, it's what i call the "box of shame." it's all the reasons that i'm not going to get these outcomes. so don't look to me to get that blood lead down, because how can i be expected to make progress when there's hazard politics, health insurance coverage, availability of new technology? well, my friends who are big on this discussion of elaborating inputs and moderators would say exactly the opposite. they would say, "no, you want to identify those outside factors.
then what you want to do is map them to your logic model." and the act of mapping them reminds you that these outside factors don't necessarily sink the entire program. what they do in a multi-pathway program is they may sink one pathway, but not another. well, that gives you some opportunities perhaps to ask yourself, "can i get up the mountain knowing that the northern pathway is out? can i get up the mountain using only that central pathway?" so we see that hazard politics certainly is bad, but it kills mainly that northern route. i can refer till doomsday;
the lead source won't get removed until the town has dealt with toxic mold and asbestos. the insurance climate is really bad. it essentially kills that pathway: i can refer for medical treatment as much as i want; there is no one in town who wants to take on another uninsured kid. the technology doesn't kill the whole program, but it does kill the ability of screening to identify those kids with ebll [elevated blood lead levels] in a quick and efficient fashion. now, lead is not the best example of this, because as we said before, the nature of this program is such that all three pathways have
to happen and happen concurrently. but you can see, in a program with multiple pathways where that wasn't the case, knowing, "boy, oh boy, i can think of some things i can do as a program to improve the insurance climate and to buy that new technology. and then i can think of a way to scale this program and to target this program so that, even though the hazard politics are not in my favor, i can make it over to reducing elevated blood leads." so just like with inputs, we use this observation both prospectively and retrospectively.
if the program isn't out of the box, and i lay out these moderators, and i say, "boy, oh boy, a couple of these are really going to be killer assumptions," it tells me right now, just like it told me before with relationships, why would i bother with this program? i can't get anywhere past referring for cleanup, anywhere past referring for medical treatment. and i might even lose half the kids i screened because their moms and dads wouldn't stay around for the confirmatory venous blood draw. what actions can i take that will mitigate those moderators? in which case, they're not moderating factors anymore. they come up into the model as very, very strong outcomes that drive my program
down to reducing elevated blood leads. again, retrospectively, as an evaluator, i'm listening to this discussion and what i'm hearing is three more focal evaluation questions. were the hazard politics aligned in a supportive direction? was the insurance climate supportive enough that you could find people to take on the kids we identified? and were we able to procure that very, very rapid technology for assessing kids, such that we didn't lose a whole bunch of people between the initial and the confirmatory screen? so as with everything we're talking about, there's a prospective and a retrospective reason
for doing this, and that's what makes logic models so successful in this cpi or cqi [continuous quality improvement] space. it helps us prospectively and immediately, in a process use way, identify ways to make our program better by redesigning it or doing workarounds in the case of inputs and moderators. retrospectively, as evaluators, we run toward the killer assumption. even if i haven't had the prospective opportunity to make those changes, i know that what's going to sink this program are these missing inputs and these secular factors we call moderators that might get in the way, and so i'm going to measure those so that, if the program doesn't work,
i have something to say to stakeholders, funders, and other people who matter. i did the world's best job on this program -- look at all those outputs! but i didn't pay attention to, or i couldn't conquer, this outside factor. either i'm on the case and i'm going to do it next time, or i can't conquer this and it's not worth trying to solve this problem at this time in this community. so in closing, what were our takeaways? it's never about the model.
it's never about what the model looks like. it's about understanding your program. and sometimes that can be a circle. sometimes it can be five boxes. sometimes it can be 15 boxes. sometimes it has all kinds of arrows. sometimes it doesn't. number two, most, and the best, benefits may be process use benefits. getting people on board with evaluation often is enhanced
by showing them how these early things that happen in an evaluation process actually benefit them with immediate process use insights they can initiate and implement right away. number three, logic models help us cut to the chase on the underlying logic of our program, especially in cases where we have many, many competing definitions of the program floating around. number four, almost all the benefits of logic modeling come early in the game from very, very simple logic models. that doesn't mean there's no utility to elaborating the logic model,
including terms like inputs and outputs and mediators and moderators. i just showed you how much more benefit that can bring. but a lot of the value of logic models -- sphere of influence, sphere of control, what's my accountable outcome -- comes from these very, very simple logic models, and you then don't risk losing people to logic model fatigue. and number five, form always follows function. how accurate does the logic model need to be? it depends upon who's using it and for what purposes. it's not very long before the model gets too complex or complicated to generate the kind
of discussion or communication or consensus-building that you were trying to do when you developed the model. next steps -- if this has piqued your interest, we have a companion webinar that looks very much like this, but homes in on how to use this modeling and road mapping approach for strategizing and strategic planning, and that's on our website. we've turned this class and the companion class into a cdc u practicum, and we'll be offering that several times in calendar year '17. just go to the cdc learning portal to find it. we're developing a self-guided manual for people who don't have time for an all-day practicum
but need something a little bit more than a 90-minute webinar. and then, on any of these issues that we talked about today, we're always available for more information. just contact myself or my deputy, dan kidder, or go to our websites,
both the intranet website [for cdc employees only] and the internet website, and you'll see these webinars and a host of other resources that will help you answer these kinds of questions. thank you very much for your time and attention, and we hope that this was useful.