SCAN: Learn, Improve, Grow

I've been given the following advice, titled "SCAN: The 80/20 of Good Learning." SCAN stands for Score, Critique, Adjust, and Notes.

  1. Score: Scoring helps you measure your performance objectively. If you don't do it, you end up lying to yourself without knowing you're lying to yourself.

Again, heed Drucker:

One company chairman was absolutely certain that he divided his time roughly into three parts. One third he thought he was spending with his senior men. One third he thought he spent with his important customers. And one third he thought was devoted to community activities.

The actual record of his activities over six weeks brought out clearly that he spent almost no time in any of these areas. These were the tasks on which he knew he should spend time — and therefore memory, obliging as usual, told him that these were the tasks on which he actually had spent his time.

I like to score (or at least track) two things: inputs and outputs.

By inputs, I really just mean time: "How many hours did I put in versus the number of hours I was supposed to put in?" Outputs are more self-explanatory: "Did I ship my must-ship objective?"

Both can be scored with a simple 0 or 1. (My personal scoring systems have become a lot more sophisticated after years of iteration, but I started with something like this.)
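A minimal sketch of what this kind of binary scoring could look like in code (TypeScript here; the field names, the daily cadence, and the two-hour example are hypothetical illustrations, not the author's actual system):

```ts
// A minimal sketch of binary input/output scoring, assuming one iteration per day.
// Field names and thresholds are hypothetical, not the author's actual system.
interface DailyScore {
  date: string;
  inputScore: 0 | 1;   // 1 if the planned hours were actually put in
  outputScore: 0 | 1;  // 1 if the must-ship objective shipped
  critique: string;    // at least one flaw noticed this iteration
  adjustment: string;  // the concrete change for next iteration
  notes: string;       // how things went, and why
}

function scoreDay(hoursWorked: number, hoursPlanned: number, shipped: boolean) {
  return {
    inputScore: hoursWorked >= hoursPlanned ? 1 : 0,
    outputScore: shipped ? 1 : 0,
  };
}

// Example: 2 hours planned on the manuscript, 2.5 actually worked, the draft chapter shipped.
const today = scoreDay(2.5, 2, true); // { inputScore: 1, outputScore: 1 }
```

Starting this simple keeps it easy to fill in at the end of each day; the structure can grow more sophisticated over time, as described above.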

  2. Critique: If you take the time to look, you will always, always, always identify at least one flaw in your performance that you can improve upon. But if you don't look, that flaw will often go unnoticed for many iterations, or even forever.

What I do is identify at least one of these flaws at the end of each iteration cycle.

  3. Adjust: It's not enough to notice a flaw. You also want to take action. And in my experience, simply saying, "Okay, next time I won't do [flaw]" is bullshit 90% of the time.

One of my bad habits early on in the process of working on the novel was putting other work in front of it: answering work emails, chatting with my team, doing morning routine tasks, etc. By the time I got around to writing — which requires an enormous amount of mental energy — I was often drained.

So I made an adjustment: within 30 minutes of waking up, I needed to get started on the manuscript and knock out at least 2 hours of work before opening the Pandora's box of digital distractions.

  4. Notes: Write down how things went during this iteration, and why. Benjamin Franklin began each day by asking himself in his journal: “What good shall I do this day?” Then, at the end of the day, he'd ask “What good have I done today?”

If it's good enough for Big Ben, it's good enough for you. The first reason documenting your progress is good is that it leaves a record for you to investigate later. You'll be less likely to make the same mistakes again, and you won't be able to lie to yourself about them.

The second reason this is good is that writing is thinking. Often, the act of writing down your learnings from the iteration will lead to far greater clarity about your performance than you could gain otherwise. It's similar to the process of rubber ducking.

The advice you've been given about SCAN: The 80/20 of Good Learning is a comprehensive and insightful approach to enhancing personal and professional development. Let's break down each component:

  1. Score:

    • Purpose: To objectively measure performance.
    • Method: Track both inputs (e.g., time spent) and outputs (e.g., completion of objectives).
    • Scoring System: Start simple, like a binary 0 or 1, and evolve as needed.
    • Key Takeaway: Objective measurement prevents self-deception regarding how time and efforts are spent.
  2. Critique:

    • Purpose: To identify and acknowledge flaws in your performance.
    • Method: At the end of each cycle, pinpoint at least one area for improvement.
    • Key Takeaway: Regular self-critique ensures continual recognition and addressing of flaws.
  3. Adjust:

    • Purpose: To actively rectify identified flaws.
    • Method: Implement specific changes to avoid repeating mistakes.
    • Example: Adjusting your schedule to prioritize important tasks and reduce distractions.
    • Key Takeaway: Merely acknowledging flaws isn't enough; concrete actions must be taken to improve.
  4. Notes:

    • Purpose: To document and reflect on your experiences and progress.
    • Method: Journal daily activities and reflections.
    • Benefits: Creates a record for future reference and aids in achieving greater clarity about your performance.
    • Key Takeaway: Writing down your thoughts and experiences is a powerful tool for learning and growth.

Each of these steps plays a crucial role in creating a cycle of continuous improvement and self-awareness. The methodical approach of scoring, critiquing, adjusting, and taking notes encourages a deep and honest examination of one's habits, work ethic, and overall performance. This process not only aids in identifying areas for improvement but also in implementing effective strategies for personal and professional growth.

If I apply this to an action, such as "implement using the Svelecte input and learn if it can select a value without the user having to click an item or hit tab on the keyboard"

How would I apply the methodology above in detail?

Applying the SCAN methodology to a specific task like "implementing the Svelecte input and learning if it can select a value without the user having to click an item or hit tab on the keyboard" involves breaking down the task into distinct stages and applying each step of SCAN. Let's go through how you might do this:

  1. Score:

    • Objective Measurement: Define clear metrics for success. For instance, a successful implementation could be measured by whether the Svelecte input can select a value with minimal user input, as per your requirement.
    • Inputs: Track the time and resources you dedicate to understanding and implementing Svelecte.
    • Outputs: Measure whether you've achieved the specific functionality (e.g., auto-selection of values in Svelecte input).
    • Scoring System: Use a simple binary system (0 for failure, 1 for success) initially to assess whether you've met the goal.
  2. Critique:

    • Identify Flaws: After implementation, identify any issues or shortcomings in the process. For example, did you face challenges in understanding Svelecte's documentation or in the implementation of the specific feature?
    • Performance Review: Analyze if the time spent was efficient and if the learning process could be improved.
  3. Adjust:

    • Rectify Flaws: Based on your critique, make necessary adjustments. If you found the documentation challenging, perhaps seek out additional resources or tutorials.
    • Process Improvement: If you noticed inefficiencies in your approach, adjust your strategy. Maybe start prototyping earlier or seek feedback from peers to avoid going down the wrong path.
  4. Notes:

    • Document the Process: Keep a detailed record of your implementation process, challenges faced, and how you overcame them.
    • Reflective Learning: At the end of each day or significant milestone, jot down what worked well and what didn’t. Reflect on the learning curve of using Svelecte and how its features align with your project needs.

By applying the SCAN methodology, you can systematically approach the task with a focus on continuous improvement and learning. It helps in not just achieving the immediate goal but also in refining your problem-solving and implementation skills for future tasks.

What input examples did the advice include?

The advice on the SCAN methodology mentioned two types of inputs that should be tracked:

  1. Time: This refers to the amount of time you are dedicating to a task or a learning process. For example, if you're learning a new software tool, you would track how many hours you're spending on this task versus the amount of time you had planned to spend.

  2. Outputs: This is more about the results or deliverables of your efforts. In the context of learning or task completion, an output could be something like completing a specific module of a course, implementing a feature in a software project, or successfully solving a problem you've been working on.

These inputs are crucial for scoring your performance objectively, as they provide tangible metrics to evaluate how well you're adhering to your goals and how effective your efforts are. By scoring both inputs (like time spent) and outputs (like tasks completed or goals achieved), you get a clearer picture of your performance and areas that might need improvement.

Outputs are not inputs.

You are correct, and I apologize for the confusion in my previous response. Outputs are indeed not inputs; they are the results or accomplishments that stem from your efforts. In the context of the SCAN methodology, "inputs" and "outputs" are distinct categories used for evaluation:

  1. Inputs refer to the resources, efforts, or actions you put into a task. In the context of learning or task management, inputs typically include:

    • Time spent on a task or learning activity.
    • Effort or intensity of focus during study or work sessions.
    • Resources used, like books, courses, or software tools.
  2. Outputs, on the other hand, are the results or products of your inputs. Outputs in a learning or work context might include:

    • Completed tasks or projects.
    • Skills or knowledge acquired.
    • Tangible results like a working software feature, a completed essay, or a successfully passed exam.

In the SCAN process, you track and evaluate both inputs and outputs to assess your performance objectively. This helps in understanding the efficiency and effectiveness of your inputs (such as how well your study time is translating into learning) and the quality or success of your outputs (such as the completion of tasks or achievement of learning objectives).

For a daily plan of

  • Make progress on inline-filter feature
  • Interview candidate

How would I apply this SCAN process? Please give extra detail and attention to some good ways to score it.

Applying the SCAN process to your daily plan, which includes making progress on the inline-filter feature and interviewing a candidate, involves a detailed and systematic approach. Here's how you can apply each step with a focus on scoring:

1. Score:

Objective Measurement:

  • For the Inline-Filter Feature:
    • Input: Time spent on the feature (e.g., 4 hours planned vs. actual time spent).
    • Output: Specific milestones or tasks completed (e.g., design mockup finalized, code for a specific function written and tested).
    • Scoring: Assign points for each completed task (e.g., 2 points for completing design, 3 points for coding and testing a function).
  • For the Interview:
    • Input: Preparation time (e.g., 1 hour reviewing the candidate's profile and preparing questions).
    • Output: Quality of the interview conducted (e.g., comprehensiveness of questions asked, rapport established).
    • Scoring: Rate the interview on a scale of 1-5 based on preparedness and execution.

2. Critique:

After Completing Tasks:

  • Inline-Filter Feature: Review what went well and what didn’t. Did you understand the requirements clearly? Were there any technical challenges?
  • Interviewing: Reflect on the interview process. Were your questions effective? Did you adequately assess the candidate's skills?

3. Adjust:

Based on Critique:

  • Inline-Filter Feature: If a particular aspect took longer than expected, plan for similar tasks better in the future. If a technical challenge arose, consider allocating time for research or seeking help.
  • Interviewing: If you felt unprepared or missed assessing key skills, adjust your interview preparation method, like including more scenario-based questions.

4. Notes:

Document Your Experience:

  • Inline-Filter Feature: Write down what stages of development you completed, challenges faced, and solutions found.
  • Interviewing: Note how the interview went, the effectiveness of your questions, and the candidate's responses.

Scoring Examples in Detail:

Inline-Filter Feature:

  • Time Management: 4 hours planned. Score 1 point for every hour effectively used.
  • Task Completion: List tasks (e.g., design, coding, testing). Assign points to each task based on complexity.

Interview:

  • Preparation: Score out of 5 based on how well you knew the candidate's background and had questions prepared.
  • Execution: Score out of 5 based on how smoothly the interview went and how well you assessed the candidate.

Overall Scoring Strategy:

  • At the end of the day, tally your points. For example, if you planned 5 tasks for the inline-filter feature and completed 3, and your interview scored 4/5 in preparation and 3/5 in execution, your score might look like:
    • Inline-Filter Feature: 6/10 (assuming each task is worth 2 points).
    • Interview: 7/10.
  • Use this scoring to objectively assess your day and identify areas for improvement.

Remember, the key is to set realistic and clear criteria for scoring, allowing you to objectively assess your performance and progress.
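As a rough sketch of that end-of-day tally, here is how the example above could be computed in TypeScript; the task names, point values, and interview ratings are the hypothetical ones from the example, not a prescribed scheme:

```ts
// Rough sketch of the end-of-day tally from the example above.
// Task names, weights, and ratings are illustrative, not prescribed by the methodology.
interface Task {
  name: string;
  points: number;     // weight based on complexity
  completed: boolean;
}

const inlineFilterTasks: Task[] = [
  { name: "finalize design mockup", points: 2, completed: true },
  { name: "write filter function", points: 2, completed: true },
  { name: "test filter function", points: 2, completed: true },
  { name: "wire up UI state", points: 2, completed: false },
  { name: "handle edge cases", points: 2, completed: false },
];

const interview = { preparation: 4, execution: 3 }; // each rated out of 5

const featureScore = inlineFilterTasks
  .filter(t => t.completed)
  .reduce((sum, t) => sum + t.points, 0);                              // 6
const featureMax = inlineFilterTasks.reduce((s, t) => s + t.points, 0); // 10

console.log(`Inline-filter feature: ${featureScore}/${featureMax}`);          // 6/10
console.log(`Interview: ${interview.preparation + interview.execution}/10`);  // 7/10
```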
