What I Think #studentaffairs Assessment Should Look Like

  1. We need to track individual students’ attendance at events over the course of their time at the school.  I’m not particularly interested in doing this for any purpose other than to see that student affairs programming is actually reaching all of the students.  I suspect we’ll find that we’re hitting a small portion of the population.  To my knowledge, the infrastructure does not exist to do this, but what I’m seeing in my head is an enormous spreadsheet with individual student ID #s and tracking of what events they attended.  Frankly, this could be an entire student affairs (as well as athletics) effort to figure out what experiences our students are actually attending.
  2. Learning outcomes listed and justified for EVERY event.  I posted about the CAS standards here previously, and I’m thinking that you could list out individual learning outcomes for each event and then track these over the course of the year.  In theory, each office (even better if this was a collaborative SA effort) should be hitting all of the learning outcomes repeatedly.  Being able to cross-reference these to determine what portion of our students are getting hit (not just shots in the dark, but actual individual students) by each learning outcome will give you some idea of what’s actually happening.
  3. Cost per student for each event.  If you’re tracking who attends, you should be able to get actual attendance numbers instead of estimates as well.  Calculating cost per student will help to determine whether students are actually getting the value that they should be getting out of their student fees.  A healthy check is to determine how much a similar experience might cost elsewhere; if your cost per student is lower, you’ve done your job.  (A minimal sketch of how points 1 and 3 could work off the same data follows this list.)
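
To make the “enormous spreadsheet” concrete, here is a minimal sketch of how points 1 and 3 could fall out of the same raw data. Everything in it is an assumption on my part (I’m imagining swipe records landing in a CSV with invented columns: student_id, event_id, event_cost), but it shows that unique reach and real cost per student are the same bookkeeping problem.

```python
import csv
from collections import defaultdict

def load_swipes(path):
    """Load a hypothetical swipe log: one row per card swipe at an event.
    Assumed columns: student_id, event_id, event_cost (total event cost)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def reach_and_cost(swipes, enrolled_count):
    attendees = defaultdict(set)        # event_id -> student_ids who swiped in
    events_attended = defaultdict(set)  # student_id -> event_ids attended
    cost = {}                           # event_id -> total cost of the event
    for row in swipes:
        attendees[row["event_id"]].add(row["student_id"])
        events_attended[row["student_id"]].add(row["event_id"])
        cost[row["event_id"]] = float(row["event_cost"])

    # Point 1: what share of the student body are we actually reaching?
    reached = len(events_attended)
    print(f"Reached {reached} of {enrolled_count} students "
          f"({reached / enrolled_count:.1%})")

    # Point 3: real cost per attendee, from actual swipes instead of estimates.
    for event_id, ids in sorted(attendees.items()):
        print(f"{event_id}: {len(ids)} attendees, "
              f"${cost[event_id] / len(ids):.2f} per student")

if __name__ == "__main__":
    reach_and_cost(load_swipes("swipes.csv"), enrolled_count=15000)
```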

A few thoughts:

  • This plan places the responsibility for accountability on staff instead of surveying students to determine whether they’re engaged.  We’re responsible for creating an environment for student learning, and this plan tracks whether we’re actually creating that environment.
  • For the most part, student affairs learning/community building is tracked over a longer time frame than classroom learning.  You can definitely learn chemical structures (okay…maybe you can’t…but I did) over the course of a day of studying.  You can’t learn how to have meaningful relationships over the course of a day.  Short-time-frame assessment, in light of this thought, is rather pointless, and this system would provide the infrastructure to do a more meaningful long-term study.
  • Self-reported assessments of students are of marginal value anyway.  Incentives exist for students to either not take these seriously, say what they think the surveyor wants them to say, or outright lie.  In light of the incentives, the data received from these assessments (unless you’ve managed to limit these incentives somehow) is questionable.
  • Tracking financial expenditures with more accountability for said expenditures is imperative.  I’ve heard SA folks refer to activities fees as “play money”.  Please.
  • Tremendous research opportunities would be made available by tracking all of this data.  I think we all know that’s needed.

I’m genuinely looking forward to reading the comments.

39 thoughts on “What I Think #studentaffairs Assessment Should Look Like”

  1. I am in the process of designing a study on programming learning outcomes for my stats camp project. We have an intensive week-long program for faculty, and for the first time it is open to staff. It is competitive to get in, so I am hoping my application will be accepted so I can refine what I want to do.

    Each of our housing programs needs to fit into at least one of our five learning outcomes. My goal is to measure what we planned to cover and compare it with what we actually covered in the program, and then randomly choose a few attendees to see if they saw those outcomes within the program.

    I am concerned that we are not quantifying how we use the outcomes. I am also reading Upcraft and Schuh’s Assessment in Student Affairs as well as Schuh and Associates’ Assessment Methods for Student Affairs so I can expand assessment in my area.

    I am looking forward to the continued discussion. Thank you for blogging about it.

    Laurie Berry
    Director of Housing and Residence Life
    University of Southern Indiana

  2. Hi Jeff, thanks for this post; I think it articulates what (hopefully) many of us are thinking about assessment issues.

    Tracking individual attendance is something I’ve wanted to do for a long time and it really isn’t that hard. I built a small database that would record student IDs when they swiped their card on a small laptop. My only sticking point was getting a list mapping card numbers to student IDs from IT. So I think it’s doable and not that hard if an honest effort is made. The data we could collect is amazing. Relationship of attendance to conduct, are certain students attending events together, etc. I also look at it from a marketing standpoint. If we could see that a student attended two career workshops, we can email them about a third they might be interested in.
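
    For what it’s worth, the capture piece really can be tiny. Most USB card readers act as a keyboard, so something like this sketch (names invented, not my actual code) is roughly all the laptop at the door needs to run:

    ```python
    import sqlite3
    from datetime import datetime, timezone

    # Most USB magstripe readers emulate a keyboard: each swipe arrives
    # as one line of text input. Table/column names are illustrative only.
    conn = sqlite3.connect("swipes.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS swipes
                    (card_number TEXT, event_id TEXT, swiped_at TEXT)""")

    event_id = input("Event ID for this session: ").strip()
    print("Ready -- swipe cards (Ctrl+C to stop).")
    try:
        while True:
            card = input().strip()
            if card:
                conn.execute(
                    "INSERT INTO swipes VALUES (?, ?, ?)",
                    (card, event_id, datetime.now(timezone.utc).isoformat()),
                )
                conn.commit()
    except KeyboardInterrupt:
        conn.close()
    ```

    The swipes still have to be joined against that card-number-to-student-ID list from IT before any analysis, which remains the real bottleneck.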

    You talk about learning outcomes, and I’ve been wanting to see more surveys that relate to the outcomes for a program rather than generic satisfaction questions. There should be at least two learning questions for every survey along with basic satisfaction/demographic questions. I moved this way for RA training evals and there was some push-back, but I think if students see they are being tested it might create a stronger sense of learning engagement. Yes, there is room for some fun events (Snooki?!?!) but the majority of our events should expect some form of learning to result – even if that learning is social skills and relationship building.

    One question I have is what we can do as a profession to get more value given to our learning activities. Courses receive credit because they are highly planned and peer reviewed. Would moving in that direction give more value to student activities?

  3. Jason, slamming comment man! I want to address a few things….
    -completely agree that it SHOULD be easy to get that data as long as you can get IT on board. In theory, creating a comprehensive plan like this and passing it up the chain might provide the political clout to get it moving along, but often, things are not as easy in practice as they “should” be in theory.
    -RA training is an entirely different bag than what I’m talking about. Much like a programming board, RA training is a closed group with defined things that they should get out of the experience. It’s much more analogous to a classroom situation and thus should be evaluated in a similar fashion. The outcomes for an open invitation program might be a little more nebulous but are nonetheless valuable (frankly, you’d be surprised, I think, at the outcomes that a talk by Snooki might have). If I remember correctly, you’re in housing; I’m in student activities. We just have different bags of programs, but we nonetheless provide experiences for student learning. I’d encourage you to check out those CAS standards (http://jefflail.com/2010/04/14/quick-review-of-cas-standards/) and try to think about how Snooki might provide some value to those in attendance. I swear there’s value to it, regardless of your feelings on Snooki.
    -As far as getting credit for student activities….interesting philosophical question and I’m with you. However, I think we’re at about step 30 and that’s at step 100. Interested to hear other folks’ opinions.

    thanks for stopping by.

  4. Jeff, I think you’re completely on target here. When we report lump-sum contact hours or total numbers of students who participate, we lose sight of how many repeat attendees we have at our programs and events. While there’s nothing wrong with repeat attendance, reporting it this way creates false advocacy for our programs. The 638 students who attended programs during fall semester may actually be the same 20 students over and over again. Where are the rest of my students, and what are we doing wrong?

    Our department has learning outcomes for every program we do and all are based on CAS standards. The student leaders are asked to explain those outcomes before student evaluations are handed out so participants have some sense of what the goal was for the program. Too often we assume they “get” the point just by attending, but if the program isn’t well planned or executed, the actual message may be lost.

    I’m glad our field has people like you who are willing to push us to look at these issues and keep making changes for the better. Thanks for the thoughtful post — I’m going to bring this back to my office and engage in some discussion of how we can better track our own data.

  5. Hey Laurie,
    Thanks for responding to the email I just sent you so quickly; it will help me not look like a dummy in this response to your comment!!

    -Stats camp sounds like a blast. I’m hoping you’ll share some of what this is all about in some form or fashion. I’m curious what thoughts you’re having on the current status of the programs done in your department as you’re preparing your project.
    -I know this was kind of a throwaway bullet point (bullet point 3) in my post, but I am genuinely concerned by the surveys that we do after events. I’m just not sure that the data received from these surveys is of much value, since it’s nearly impossible to detach those incentives I mentioned from the process. I’d love for someone to propose a way that they’ve negated those incentives, but until then, I’m highly suspicious of self-reported survey data.

    good stuff. Looking forward to hearing more about stats camp :)

  6. Hey Stacy,
    Your and Jason’s comments have made me realize something I missed….
    I think assessing student leaders, program boards, RAs, employees, etc. is a whole different bag. These are closed groups of students who have “signed up” for a year of intensive mentoring, effectively. I’d say this is semi-analogous to a classroom and could be assessed in a similar manner. However, I think it’s still subject to that self-reporting bias that I mentioned in bullet point 3, so that needs to be a caution in any data collection.

    • Great point, Jeff. I wasn’t actually referring to assessment of student leaders, but rather the assessment we conduct of their planned programs and events that are open to all of our residents.

      Assessment of RAs *is* a totally different bag, and one that requires a comprehensive departmental assessment strategy to conduct effectively. It combines program evaluations, resident feedback, and performance evaluation tied to learning objectives and job description just on the surface level. That could be an entirely different thread. :)

      • Umm…yes. Agreed. *cough cough*…think you just volunteered for that one. *cough cough*

        Your post reads totally differently now that I understand your context.

        I actually don’t place a burden on my students about learning outcomes for their programs. Given that I work with a program board, we talk about large-scale goals for campus programs (community building, diversity, keeping students on campus, retention) but don’t go into CAS standards because ultimately it’s their money and their choice to do what they will; it’s my job as a professional to make sure these programs are fitting a purpose and to think about it “next level”. Maybe I’m wrong on that one, but that’s my two cents.

  7. Jeff,
    Nova has something like this. We only track first-year students. It’s called the NSU Experience, but we (SA professionals) see it as red, white, and blue charts. We try to get the students from red to blue. Red means not engaged, white means they’ve been to at least one event, and blue means they’re a member of a club. We look at where the students are every 6 weeks. We have huge spreadsheets with their ID #s on them. I think we track every event on the sheet, but it could just be the main ones. Anyway, we have a tracker and students are supposed to request it for their event. Students swipe their ID card and it tracks them. Again, this is only for first-year students. We are working on tracking everyone. All the RAs and OLs push their first-years to go to at least one event. They even walk over with them. It’s a work in progress.
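
    In pseudocode terms, the buckets boil down to something like this (a simplified sketch of the idea, not our actual system):

    ```python
    def engagement_status(event_count, is_club_member):
        """Red/white/blue buckets as described above: red = not engaged,
        white = attended at least one event, blue = member of a club."""
        if is_club_member:
            return "blue"
        if event_count >= 1:
            return "white"
        return "red"

    # Recomputed every 6 weeks over the first-year cohort to watch
    # students move from red toward blue.
    ```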

  8. Hi….

    I think that’s an ideal format for assessment but we also have to have the conversation about campus-wide support. If we could put the pieces you describe into effect, I would hope that things like tracking participation by student and not by program could be supported campus-wide. Even our academic colleagues haven’t been able to crack the code on effective assessment support and strategy and they have been at this a lot longer than student affairs has.

    We made the leap to learning outcomes a while back, and instead of making assessment clearer to me, it’s actually made it even more unwieldy. There is just so much to be done… any thoughts on how we can attack this to get the timely results we need? Common advice is to pick isolated projects each year, but I don’t want a program to exist for five years before we know it’s not making an impact.

    I think a centralized student affairs assessment staff member is one of a few answers…the others have to do with professional development that is accessible to all levels and even more assessment resources that are practical in nature. (i.e. more on “assessment programs for university unions”, etc. instead of “intro to survey design”)

    Thanks for this post!

  9. Kelly….
    I’m curious about how folks think this is going. I like the goal and I think it’s a start, but are there plans to expand the initiative? I love the basic idea. Obviously it needs some tweaking but you’re heading in a great direction!

  10. Disclaimer: I’m in academic affairs, so most of the work I do with students takes place either in advising appointments or in the classroom. I also rely on my ed tech background – again an area that mostly focuses on learning in the classroom or in training. BUT, I think these thoughts can translate to student affairs programming…so bear w/ me.

    1. We need to think of assessment in a broader sense of time. Mostly what I see in student affairs is what is considered summative assessment – or assessment that takes place at the end of an event. After the learning takes place. But w/out any long-term follow-up. What about formative or confirmative assessment?

    Formative: When we are planning events or learning activities, at what point do we pilot test our ideas or have an outsider review them? What type of learner analysis do we do – i.e. learn more about our audience and their needs? How might the environment factor into the event (e.g. room, location, time, other distractions)? How do we determine on a small-scale that our activities – while in the planning stages – are going to meet our learner objectives?

    Confirmative: How do we know in the long-term (long past the end-of-event survey) that we did what we said we were going to do? I think this part addresses your idea of tracking learning over a longer period of time – it’s not just a one-shot deal.

    2. We need to think of assessment as more than just satisfaction surveys – which I know you agree w/ here. Not to get super-nerdy and deep, but check out Kirkpatrick’s levels of evaluation: http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm

    Kirkpatrick has 5 levels of evaluation (4 are listed on the link, but he added Return on Investment as the 5th level). Satisfaction / Reaction is at the bottom of the evaluation chain. Again, this model is really built for training, particularly corporate, but think of how this model could be adapted for student affairs work. In fact, I think that’s an article idea…anyone want to collaborate?

    P.S. In the state of Florida, student government at the public universities controls the activity fees, not student affairs staff. At UF that amounts to 16 MILLION each year. Discuss.

  11. Jeff,

    Thank you for giving us a great place to start the conversation on assessment. It’s nice to see conversation going beyond attendance numbers and satisfaction surveys. If all we’re doing is putting butts in seats and making students happy, it’s no wonder student fees are seen as play money.

    I think we all agree that we cannot expect every program to meet every CAS standard, but that a year-long schedule (or maybe 4 year experience) should repeatedly provide these opportunities to students. Another place to look for achieved learning is from our programming boards and others who are essential in the planning process of our events. Let’s continue the Snooki example: the students in the audience may be attending what they perceive as a fun program, but what learning occurred behind the scenes for the students coordinating the event?

    @Melissa, thanks for linking to Kirkpatrick, a perfect resource for this conversation!

  12. Thanks, Melissa, for asking some other good questions. I recognized all of these things were issues before I wrote this post, and I still think they’ll be issues if we did what I’m suggesting. Great points and good reads to get folks thinking.

    I like my plan because it’s simple, it asks straightforward questions that are built on accumulating data, and once the infrastructure is in place, the questions are rather easily answered. And I also like it because, near as I can tell, outside of Nova Southeastern, no one’s getting answers to these simple questions.

    While I think what you bring up is relevant, I think, as an academic affairs person, you’d agree that everyone has problems with getting good answers to the questions that you’re discussing. Tracking whether students are learning anything from programs and how to best figure it out is a problem in any environment where learning takes place (No Child Left Behind, anyone?)

    So while I agree with you and agree you make a point, I think you missed the point of my post. I want to stop assessing the students as actively and start assessing the efforts of the staff to reach the students. Our data at present is faulty and limited; we are hell-bent on justifying ourselves and ill-equipped to do it. My plan clears the murky waters and starts developing a foundation for asking whether the efforts of student affairs are worthwhile and working.

  13. Hey Becca,
    The learning of the board itself is a completely separate conversation from the learning of the attendees. I’m addressing here the learning of the attendees (should have clarified that in the post). I absolutely do not think that there is an event that students don’t get anything out of, but maybe I’m alone on that one. And if our students are only planning programs that meet certain outcomes, it’s the responsibility of staff to consider filling those gaps. BUT that’s where the whole tracking piece comes in…so, yeah.

    • I agree that there is questionable reliability in any self-assessment, but I do see the value in asking students to define their experiences (through blog, video recap, survey, or otherwise). While I agree that a student is not going to master “meaningful relationships” by attending any single program, the change in their responses or competency can be measured over time, especially considering your proposal to track student engagement over time.

      If we as staff are outlining learning outcomes for our programs (probably still a first step in a lot of cases), how are we assessing their actual measurable success? A student who attends 3 programs that address CAS standard X cannot be assumed to have learned it. It sounds like you are proposing a plan that programs toward several outcomes but isn’t necessarily measuring their attainment.

      • I think having this data would make all of our other efforts better. You have to understand, ever since I started in this field, I’ve felt like the only data person in the room, and I continue to see faulty conclusions and justifications based on horrific data. I’m proposing a way to build the data so we can improve the assessments you’re talking about, as well as possibly have MANY other applications of said data.

  14. Ok, point taken – maybe an intro to your post would have helped clarify your original purpose for posting rather than just jumping into suggestions. And now that I know, I have to say I kind of disagree w/ your intent.

    You want to assess the efforts of staff to reach the students? Isn’t that a similar line of thinking to assessing K-12 teachers on how much their students learn? Without considering other environmental factors, not to mention the student’s own motivation? We can’t force students to come out to events. We can’t hold their hands. Students need to be accountable for taking advantage of the products of their student activities fees, just as staff need to be accountable for spending those fees wisely. And as I mentioned previously, not all schools are in charge of those fees to begin with.

    Yes, in these times where student affairs folks are nervous about reorganization particularly, some ROI data to justify themselves feels needed. But we can’t completely ignore the fact that these students are ALL adults, and many will choose not to participate in programs. Is that necessarily “our” fault?

    I’ll also admit that I just assume that all of us are acting w/ integrity and trying our hardest to spend wisely, plan purposeful programs, etc. Perhaps I’m naive there, but again, not my area anymore.

    • Let me just summarize the conversation this way…

      I think you’re reading this as a comprehensive assessment plan. It’s not. It’s an end I don’t think we’re thinking about, but if we did, I think it would be foundational to these other pieces we’re trying to do. Think forest, not individual trees.

  15. This is a really great post! I especially love the idea behind the first point – something that I don’t think would be that difficult technically; getting everyone to buy into it is the hard part.

    So many times when looking at the larger student learning outcomes that are carried across many programs everyone wants to attribute the learning to their program. There could be some really fun econometrics used to find better correlation (not necessarily causation) between programs and attendees if you knew all of the different activities they attended over time!
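
    Just to show how cheap a first pass would be: assuming the swipe data from point 1 existed as a CSV (column names invented here), a few lines of pandas get you event-to-event co-attendance correlations:

    ```python
    import pandas as pd

    # One row per (student_id, event_id) attendance record; columns assumed.
    swipes = pd.read_csv("swipes.csv")

    # Binary student-by-event matrix: did student i attend event j?
    matrix = pd.crosstab(swipes["student_id"], swipes["event_id"]).clip(upper=1)

    # Pairwise correlation between events: which programs share audiences?
    print(matrix.corr().round(2))
    ```

    Correlation, not causation – but it would at least show which programs are drawing the same crowd.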

    Now back to finish reading all the other comments!

    • Thanks for visiting, Evan! I concur with all that you said. I highly doubt we could find causation as well but correlation would be a START.

  16. Jeff, great post. You’ve clearly started a great conversation here. The question is, how will you synthesize this conversation into an educational session, webinar, or follow-up post to help create some action steps for folks here and beyond who are interested? I personally think you should do all three :)

    I echo your comments above about assessing student employees differently than student leaders, and even those who may serve as RAs and/or Orientation Leaders. I’m glad that ACUI has developed core competencies, but I think we need to push the envelope a bit more on how to assess those competencies, especially when it comes to our student employees. I know ACUI is working on something right now, so we shall see. The program review process this year at BSU opened my eyes to many things we can be doing better in the Union student employee assessment area.

    Thanks again!

    @EdCabellon

    • Since you’ve mentioned it, I grabbed this book from ACUI when I started managing student employees and it was EXCELLENT. http://www.acui.org/content.aspx?menu_id=30&id=9213 It provides some real resources on how to think about assessing your student employees. Mine is full of highlights.

      Here’s the piece I’m crowdsourcing – the rest I’m already implementing – any words of advice on pulling IT in to help with getting this data coordinated? I’ll say I’ve found this very challenging.

      • Great post Jeff.

        What if we also took the assessment piece for #3 (cost per student) and added some transparency to the accountability? This idea is not my own, but rather one that came up in a conversation similar to this one. Add a statement to events that $total of your ___ student fee was used to put on this event. Or divide the total spent by the total number of students who pay the fee to get a per-student figure (ex: “$0.25 of your ___ student fee was used to put on this event”). Then people can see the value in terms of what they pay.
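
        The arithmetic behind that statement is deliberately trivial, which is part of the appeal. A worked example with made-up numbers:

        ```python
        event_cost = 4500.00          # total spent on the event (hypothetical)
        fee_paying_students = 18000   # students who paid the fee (hypothetical)

        share = event_cost / fee_paying_students
        print(f"${share:.2f} of your student activity fee "
              f"was used to put on this event.")
        # -> $0.25 of your student activity fee was used to put on this event.
        ```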

        Assigning learning outcomes to every program is a great first step; I like the idea of tracking them to see where programs are heavy or light on outcomes. I would also say that taking it another step should mean looking at those outcomes across various types of programs (diversity, entertainment, wellness), particularly as it relates to student programs. If all the “learning” comes from, say, entertainment programs, have we really helped students toward that outcome as well as our role allows?

        I agree that short-term assessment is not great, given your example of meaningful relationships and the problems with self-reporting. However, given that students are over-surveyed already, what might be a better tool to measure this learning?

        Great conversation!!!!

  17. So I’m late to the party, but glad to see this conversation happening.

    The measurement tool is what I want to talk about here. While it provides tons of great data, and is relatively easy to set up, it takes so much time and energy to manage and facilitate. We started a program at Vanderbilt this year to track attendance and reward points at different campus events (http://www.vanderbilt.edu/dorerewards/). While it’s a great idea, it really requires at least a half-time person to ensure that the machines are calibrated for the events, staff members are there, and marketing is done to make sure students know where to swipe in, not to mention the data analysis. We have to go through a third-party vendor to run the software, and it’s just a chore. It’s taken a year to get clearance through the card office, to purchase the equipment, and to learn how to run the programs, so just the data collection itself is a lot to take on.
    Our program is modeled off the TallyCats program at Kentucky (http://tallycats.ad.uky.edu/) which is far, far more advanced than our fledgling venture, and I would be interested to see if they have more data analysis than our simple incentive program.

    That said, great ideas, Jeff, and great conversation all. Echoing Laurie, if you haven’t read Schuh & Associates “Assessment Methods for Student Affairs”, get to reading.

  18. Hey Jeff,

    There is some interesting stuff going on in the comments here, and I think it’s awesome your post provoked so much discussion. I think you bring up a really important point in your first suggestion and I agree with it: I would guess that many of the programs our offices hold for students are being widely attended, but by the same small percentage of students. I think there’s a legitimate possibility that student affairs divisions are providing a statistically small group of students with learning-outcomes-based programming. I do see a lot of assessment initiatives, however, that focus heavily on attendance; we do it in our own office, actually. For every program we host, students have to swipe in with their student ID card, which gives us access to both individual points of contact and repeaters. This data is incredibly helpful, but also comes with challenges of its own. As your second point suggests, we can’t rely on attendance alone to determine a program’s success, even if events are reaching a wide audience; we need to address whether those programs are DOING anything and whether students are learning and growing as a result.

    While I think it is great that our field really seems to have begun embracing the importance and necessity of assessment, what I think frustrates me is the lack of directed purpose to it all. I think outcomes based assessment is critical in program evaluation, but also think that before assessment design is even undertaken, the SA professional needs to set out some guidelines and goals: What types of data are being gathered? Who is the intended audience of the results? How and when will the results be communicated? Why does the assessment matter? I feel like if those questions can’t be answered prior to undertaking an assessment initiative, it’s not worth devoting the time and energy, or potentially fiscal resources.

    Thanks for writing and provoking discussion.

  19. One other thought occurred to me today regarding assessment and tracking students. This is easily done in a closed venue (ex: ballroom, theatre, etc.) with controlled entry/exits. But what about those come-and-go events, where even incentives are not enough to get people to check in, for example a student org fair on the outdoor commons? How do we capture those students? Also, are there separate outcomes for those students staffing tables and recruiting versus those seeking to join an organization?

    So many questions…but the fact that this post still has me thinking over 24 hours later says something.

  20. Interesting discussion! As a director of student affairs assessment, I invite you to review many of the resources in my office as well as our assessment plan for the division. I also suggest looking at ACPA and NASPA (the SAAERKC in particular) and SAAL to gain some additional insight. Each June, there is also a wonderful Assessment and Persistence Conference. One last thought…regional accrediting agencies are also a good place to start. Some tend to be fairly prescriptive on what assessment at institutions of higher ed should look like. Thanks, Jeff!

  21. Pingback: Asking the Right Questions Makes All the Difference with Technology….And an Update on Our Assessment Efforts « Jeff Lail

  22. Pingback: Student Affairs and Assessment – The Tale Continues | Jeff Lail

  23. Pingback: Innovating Assessment and Data Collection « Emergent EDU
