ENG 301 4.F: Making Graphs, Writing Findings, Thinking

Today’s Plan:

  • Making Graphs
  • Writing Up Findings
  • Data Analysis with ChatGPT
  • Thinking (about Discussion)

Review

We’re writing a professional report. It will have the following sections:

  • Objective
  • Methodology
  • Findings
  • Discussion
  • Conclusion

Methodologies Email

Got a question about methodologies, thought I’d share my response:

Yeah. Our coding scheme is based on B&L’s, but it has been transformed through different iterations of this research project. So we have to tackle that history.

We also have to explain what a coding scheme even is. Did you know what qualitative coding was before you took this class? Maybe, but probably not. So how can we explain it in an accessible way?

A real challenge here is that you have to juggle audiences. I have made this intentionally confusing and ambiguous at this point. In an academic/science/governmental/grant kind of report, you have to be incredibly detailed here. These are “prove you REALLY know what you are doing” situations. For a more public-facing report, we want to be careful not to drown our reader in superfluous details.

I have made this confusing by *not* telling you which audience to prioritize. Why? Because I want you to try to do both. At the same time. You need to provide quick overview glosses *and* more extensive details. At some point, we’ll start talking about word count, and I will keep trying to make you do more with *fewer* words.

But one way to juggle the accessible / detailed dilemma is through the use of appendices!

Making Graphs

We have data! Now it is time to turn this data into graphs. Ideally, we would have done this Friday in the lab, but that didn’t happen. So I will demonstrate and point you towards some resources.

The easy part will be generating our graphs. The more difficult part will be revising those graphs into a more rhetorically engaging draft. Remember that human attention is limited. Our job, as professional writers, editors, and designers, is to capture that attention, hold it for as long as we can, and make sure we pack every phenomenological second with meaning. We need to create hierarchy in everything we write and design to help maximize communication.

Initial Steps:

  • Step One: Make a Copy of this simple spreadsheet
  • Google has documentation for making a graph. Let’s run through the formatting options.
  • Make sure we title our graph “Figure 1. Some Short Title”
  • Does our graph need a legend?
  • Make sure labels are useful and legible (a quick sketch of these checks follows this list)
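
If you want to mock up a draft chart outside of Sheets, here is a rough Python sketch of that same checklist (title, legend, labels). The codes and counts are invented placeholders, not our actual data, so treat it as a template rather than a result.

```python
# A rough sketch of the graph checklist above, using matplotlib.
# The codes and counts below are invented placeholders, not our real data.
import matplotlib.pyplot as plt

codes = ["Written Communication", "Editing", "Social Media", "Visual Design"]
counts = [42, 28, 19, 11]  # hypothetical frequencies

fig, ax = plt.subplots()
ax.bar(codes, counts, color="steelblue")

# Title the graph the way it should appear in the report.
ax.set_title("Figure 1. Some Short Title")

# Labels should be useful and legible.
ax.set_xlabel("Code")
ax.set_ylabel("Number of postings")
plt.xticks(rotation=20, ha="right")

# With a single data series, a legend usually isn't needed.
plt.tight_layout()
plt.savefig("figure1.png", dpi=200)
```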

Revising our Graphs:
Now let’s look at our graphs like rhetorically informed information designers. How can we make them more impactful? Meaningful? What data do we NEED to share? [Note: this will be an even more recursive process once we run through the discussion section material; what content do we need? That question might help us think through some design possibilities.]

Explicating Our Graphs
Next week I will talk about how we write about our graphs and complete the findings section. It is both kind of boring and kind of hard; we will tackle it then.

But first, the fun stuff! It is time for thinking!

Discussion Section

This is the part of this project that I really enjoy. Up to this point, we haven’t really done a lot of thinking. We’ve done a lot of stuff, but I wouldn’t call any of it creative or engaging. That’s the labor we have to do to get to the fun stuff. Now that we have some data, now that we’ve made some knowledge, we have to think about what it means. It is time for analysis.

I’ve put together a Google Form to help facilitate some analytic brainstorming. This is a heuristic, a series of questions about our data that can help us better understand it. I’ll ask you to complete this form over the weekend. I will anonymize and share responses on Monday.

I want you to spend about an hour working on this; there’s an extra-credit opportunity that can earn you a full point on your final grade (this has been added to the extra-labor section). I imagine the extra-credit opportunity would take you another hour.

Required Discussion Brainstorming:

  • Compare our results to Brumberger and Lauer (and, if relevant, put them in conversation with Lauer and Brumberger). Pick a code or related cluster of codes. How do our quantitative results compare with B&L’s results? NOTE: what makes this tricky is that B&L used a really weird bubble scale to present their data. But you can still figure this out. Also, you can use codes here to think about how our data speaks to the notions of “responsive writing” and/or “hybrid editing” that L&B describe in the later 2019 article.
  • Compare our results to some of the results from my previous iterations. What’s different? *Why* might our results be different?
  • Looking through our data, I want you to identify a couple of significant codes. First, what is a class from which someone might learn a specific tech/comp/characteristic? Second, what is a non-class way a UNC student might develop a coded proficiency?

Analyzing Data with ChatGPT

This year I tried using ChatGPT to collate the codes, and it worked. Here’s a link to my ChatGPT session.

While I was working to simply measure code frequency, ChatGPT offered me other options for data analysis. It wrote:

It seems you’re providing multiple sets of codes for analysis, likely for a research or classification project. Without access to my more advanced data analysis tools, I can still help with a qualitative summary or patterns from the given list.

To help, I need to know what kind of analysis you would like:

  • Frequency count: How many times each code appears.
  • Grouping: Are there any natural groupings or clusters within the codes?
  • Co-occurrences: Which codes tend to appear together?
  • Patterns: Any patterns over time or order of appearance.

Let me know how you’d like to proceed, and I can assist further!

Two things here:

  • First, I was on the free version of ChatGPT (GPT-4o mini). That’s what it means when it says it cannot access its advanced data analysis tools. If you don’t use ChatGPT often, it will default to the more powerful version of ChatGPT. I don’t know what those advanced options are.
  • Second, we can see that there are other options available (a small sketch of one of them, co-occurrence, appears below).
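
To make one of those options concrete, here is a rough sketch of what a co-occurrence count actually computes. It assumes a made-up input format (one list of codes per job posting); it is not a transcript of what ChatGPT did.

```python
# A rough sketch of a code co-occurrence count.
# Each inner list stands for the codes applied to one job posting;
# the postings below are invented examples, not our actual data.
from collections import Counter
from itertools import combinations

postings = [
    ["Written Communication", "Editing", "Social Media"],
    ["Written Communication", "Visual Design"],
    ["Editing", "Social Media"],
]

pair_counts = Counter()
for codes in postings:
    # Count every unordered pair of codes that shows up in the same posting.
    for pair in combinations(sorted(set(codes)), 2):
        pair_counts[pair] += 1

# Most common pairs first: which codes tend to appear together?
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: {n}")
```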

For extra-credit, I would like you to explore those other options. But I would also like you to think about what data to supply ChatGPT.

Here is a link to the raw data sheet, sorted by job type. A simple way to target more meaningful data is to look for patterns or clusters in a specific job type or, say, two types (writing and editing, for instance). What do the numbers look like if you cut out social media jobs? Or what happens to the numbers if we only look at hybrid and social media jobs? What percentage of jobs have which codes?
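
If you want to slice the data yourself before feeding it to ChatGPT, or sanity-check the numbers it gives back, a small pandas sketch can handle the filtering and percentages. The column names here (“Job Type”, “Codes”) and the file name are guesses about how an export of the raw data sheet might be laid out, so adjust them to match the actual sheet.

```python
# A rough pandas sketch for slicing the raw data by job type.
# Column names ("Job Type", "Codes") and "jobs.csv" are assumptions
# about an export of the raw data sheet, not its confirmed layout.
import pandas as pd

df = pd.read_csv("jobs.csv")

# Cut out social media jobs (or keep only the types you care about).
subset = df[df["Job Type"] != "Social Media"]

# Assume each row's "Codes" cell is a comma-separated list of codes.
codes = subset["Codes"].str.split(",").explode().str.strip()

# What percentage of the remaining jobs carry each code?
pct = codes.value_counts() / len(subset) * 100
print(pct.round(1))
```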

To get the extra labor points for this, I want you to use ChatGPT to do some analysis. Select parts of our data, feed them in, and ask it to do something other than a simple frequency count. I’ve added a link to a Google Doc in the extra-labor assignment; share the results of your labor there.
