
    Grade Breakdown

    Relevant: [Admin Participation Marks ]

    To receive the full 5 marks allocated for participation, meet criteria A, B, and C below.

    A Earn at least half of weekly participation points in at least 10 weeks.

    • In-lecture quiz:
      • Answered at least 80% of the questions correctly: 2 points
      • Answered 40%-80% of the questions correctly: 1 point
    • Post-lecture quiz:
      • Answered at least 80% of the questions correctly: 3 points
      • Answered 60%-80% of the questions correctly: 2 points
      • Answered 40%-60% of the questions correctly: 1 point
    • Missing compulsory administrative requirements, e.g., forgetting to submit peer evaluations: -2 points for each miss

    As the lecture in Week N covers topics for Week N+1, Lecture N's in-lecture and post-lecture quiz points are counted toward Week N+1.

    B Receive good peer evaluations

    In addition, you can receive bonus marks in the following ways. Bonus marks can be used to top up your participation marks should your marks from the above fall below 5.

    • Receiving good ratings for all 10 peer evaluation criteria: 1 mark
    • Being very helpful to classmates e.g., multiple helpful posts in forum: 1 mark

    Relevant: [Admin Peer Evaluations → Criteria ]

    Peer evaluation criteria: professional conduct

    • Professional Communication :
      • Communicates sufficiently and professionally. e.g. Does not use offensive language or excessive slang in project communications.
      • Responds to communication from team members in a timely manner (e.g. within 24 hours).
    • Punctuality: Does not cause others to waste time or slow down project progress by frequent tardiness.
    • Dependability: Promises what can be done, and delivers what was promised.
    • Effort: Puts in sufficient effort, and tries their best, to keep up with the module/project pace. Seeks help from others when necessary.
    • Quality: Does not deliver work products that seem to be below the student's competence level, i.e., tries their best to make the work product as high quality as possible within their competency level.
    • Meticulousness:
      • Rarely overlooks submission requirements.
      • Rarely misses compulsory module activities such as pre-module survey.
    • Teamwork: How willing are you to act as part of a team, contribute to team-level tasks, adhere to team decisions, etc. Honors all collectively agreed-upon commitments e.g., weekly project meetings.

    Peer evaluation criteria: competency

    • Technical Competency: Able to gain competency in all the required tools and techniques.
    • Mentoring skills: Helps others when possible. Able to mentor others well.
    • Communication skills: Able to communicate (written and spoken) well. Takes initiative in discussions.

    C Tutorial attendance/participation is not too low

    Low attendance/participation can affect participation marks directly (i.e., if you attended fewer than 7 tutorials) or indirectly (i.e., it might result in low peer evaluation ratings).

    Examples:

    • Alicia earned 1/2, 3/5, 2/5, 5/5, 5/5, 5/5, 5/5, 5/5, 5/5, 5/5, 4/5, 5/5 in the first 12 weeks. As she received at least half of the points in 11 of the weeks, she gets 5 participation marks. Bonus marks are not applicable as she has full marks already.
    • Benjamin managed to get at least half of the participation points in 9 weeks only, which gives him 5-1 = 4 participation marks. But he received good peer ratings for all criteria, and hence gets a bonus mark to make it 5/5.
    • Chun Ming met the participation points bar in 8 weeks only, giving him 5-2 = 3 marks. He lost 2 more marks because he received multiple negative ratings for two criteria, giving him 1/5 participation marks.

    Participation marks are available on this page.

    • The important column is the Weeks Count column. It tells you how many weeks you have met the bar for criterion A. Your target is to hit 10 weeks.
    • Participation for a week is usually calculated based on two quizzes. For example, the Week 4 score is calculated based on:
      • W4-Q1: previous week's (i.e., lecture 3) in-lecture quiz
      • W4-Q2: previous week's (i.e., lecture 3) post-lecture quiz
    • The participation bar for weeks 1-3 has been simplified as follows (to account for late enrollments, LumiNUS problems, etc.):
      • Week 1: submitted pre-module survey or pre-lecture quiz
      • Week 2: submitted at least one of the quizzes
      • Week 3: did reasonably in both quizzes or did well in one of the quizzes
    • Quizzes for Week 4 and thereafter are counted as explained in A above.

    If you have queries about the participation marks, please email cs2103@comp.nus.edu.sg.

    Relevant: [Admin Exams ]

    There is no midterm.

    The final exam has two parts:

    • Part 1: MCQ questions (1 hour, 20 marks)
    • Part 2: Essay questions (1 hour, 20 marks)

    Both papers will be given to you at the start, but you need to answer Part 1 (i.e., the MCQ paper) first. It will be collected 1 hour after the exam start time (even if you arrived late for the exam). You are free to start Part 2 early if you finish Part 1 early.

    Given the fast pace required by the paper, the large class size, and the need to collect part 1 answer scripts in the middle of the exam, to be fair to all students, invigilators will not answer queries about the questions in the exam paper (but will answer queries related to exam administration).

    • If you have a doubt/query about a question, would like to make an assumption about a question, or would like to report a potential error in the exam paper, write down your doubt/query/assumption in the space provided for it at the end of the exam paper.
    • Those doubts/queries/assumptions (if justified) will be taken into account when grading.

    Final Exam: Part 1 (MCQ)

    Each MCQ question gives you a statement to evaluate.

    An example statement

    Testing is a QA activity

    Unless stated otherwise, the meanings of the answer options are:
    A: Agree. If the question has multiple statements, agree with all of them.
    B: Disagree. If the question has multiple statements, disagree with at least one of them.
    C, D, E: Not used

    Number of questions: 100

    Note that you have slightly more than ½ minute for each question, which means you need to go through the questions fairly quickly.

    Questions in Part 1 are confidential. You are not allowed to reveal Part 1 content to anyone after the exam. All pages of the exam paper are to be returned at the end of the exam.

    You will be given OCR forms (i.e., bubble sheets) to indicate your answers for Part 1. As each OCR form can accommodate only 50 answers, you will be given 2 OCR forms. Indicate your student number in both OCR forms.

    To save space, we use the following notation in MCQ questions: [x | y | z] means ‘x and z, but not y’.

    SE is [useful | boring | fun] means SE is useful AND SE is not boring AND SE is fun.

    Consider the following statement:

    • IDEs can help with [writing | debugging | testing] code.

    The correct response for it is Disagree because IDEs can help with all three of the given options, not just writing and testing.
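
    Stated a bit more formally (this is just a restatement of the notation above, not additional exam content):

    P is [x | y | z]  ≡  (P is x) AND (P is not y) AND (P is z)

    You Agree only if all three parts hold, and Disagree otherwise.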

    Some questions will use highlighting to draw your attention to a specific part of the question. That is because those parts are highly relevant to the answer and we don’t want you to miss the relevance of that part.

    Consider the statement below:

    Technique ABC can be used to generate more test cases.

    The word can is highlighted because the decision you need to make is whether ABC can or cannot be used to generate more test cases; the decision is not whether ABC can be used to generate more or better test cases.

    Markers at the left margin of the paper indicate where a question corresponds to a new column in the OCR form, e.g., questions 11, 21, 31, etc. (a column has 10 questions). Such markers can help you detect if you missed a question in the previous 10 questions. You can safely ignore those markers if you are not interested in making use of that additional hint.


    Some questions have tags, e.g., a JAVA tag, which provide additional context about the question. For example, a JAVA tag indicates that the code given in the question is Java code.


    The exam paper is open-book: you may bring any printed or written materials to the exam in hard copy format. However, given the fast pace required by Part 1, you will not have time left to refer to your notes during that part of the exam.

    Mark the OCR form as you go, rather than planning to transfer your answers to the OCR form near the end. Reason: given that there are 100 questions, it will be hard to estimate how much time you need to mass-transfer all answers to the OCR forms.

    Write the answer in the exam paper as well when marking it on the OCR form. Reason: it will reduce the chance of missing a question; furthermore, in case you do miss a question, it will help you correct the OCR form quickly.

    We have tried to avoid deliberately misleading/tricky questions. If a question seems to take a very long time to figure out, you are probably over-thinking it.

    You will be given a practice exam paper to familiarize yourself with this slightly unusual exam format.

    Final Exam: Part 2 (Essay)

    Yes, you may use pencils when answering part 2.

    Relevant: [Admin Individual Project (iP) Grading ]

    To get full marks, you should achieve at least some iP deliverables in most weeks (i.e., in at least 4 of weeks 1-7) and achieve more than 80% of all deliverables by the end.

    • Requirements marked as optional or if-applicable are not counted when calculating the percentage of deliverables.
    • When a requirement specifies a minimal version of it, simply reaching that minimal version of the requirement is enough for it to be counted for grading -- however, we recommend going beyond the minimal version; the farther you go, the more practice you get.

    Relevant: [Admin Team Project (tP) Grading ]

    Note that project grading is not competitive (not bell curved). CS2103T projects will be assessed separately from CS2103 projects. Given below is the marking scheme.

    Total: 45 marks (35 individual marks + 10 team marks)

    See the sections below for details of how we assess each aspect.


    1. Project Grading: Product Design [ 5 marks]

    Evaluates: how well your features fit together to form a cohesive product (not how many features or how big the features are), and how well it matches the target user

    Evaluated by:

    • tutors (based on product demo and user guide)
    • peers from other teams (based on peer testing and user guide)

    For reference, here are some grading instructions given to evaluators:

    Evaluate the product design based on the User Guide and the actual product behavior.

    Target user:

    • target user specified and appropriate: The target user is clearly specified, prefers typing over other modes of input, and is not too general (it should be narrowed to a specific user group with certain characteristics).
    • value specified and matching: The value offered by the product is clearly specified and matches the target user.
    • optimized for the target user: It feels like a fast typist can be more productive with the app, compared to an equivalent GUI app without a CLI.

    Value to the target user:

    In addition, feature flaws reported in the PE will be considered when grading this aspect.

    These will be considered feature flaws:
    • The feature does not solve the stated problem of the intended user i.e., the feature is 'incomplete'
    • Hard-to-test features
    • Features that don't fit well with the product
    • Features that are not optimized enough for fast-typists or target users


    2. Project Grading: Implementation [ 10 marks]

    2A. Code quality

    Evaluates: the quality of the code you have written yourself

    Based on: the parts of the code you claim as written by you

    Evaluation method: manual inspection by tutors + automated analysis by a script

    For reference, here are some grading instructions given to evaluators:

    • At least some evidence of these (see here for more info)

      • logging
      • exceptions
      • assertions
      • defensive coding
    • No coding standard violations, e.g., all boolean variables/methods sound like booleans. Checkstyle can prevent only some coding standard violations; others need to be checked manually.

    • SLAP is applied at a reasonable level. Long methods or deeply-nested code are symptoms of low-SLAP.

    • No noticeable code duplication, i.e., if there are multiple blocks of code that vary only in minor ways, extract the similarities into one place, especially in test code.

    • Evidence of applying the code quality guidelines covered in the module (an illustrative sketch is given below).
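
    For illustration only, here is a minimal Java sketch of the kind of evidence evaluators look for: logging, exceptions, assertions, defensive coding, boolean methods that sound like booleans, and SLAP. The class and method names are made up for this example and are not part of any required API.

    import java.util.List;
    import java.util.logging.Logger;

    // Hypothetical command class, used only to illustrate the guidelines above.
    public class DeleteCommand {

        // Hypothetical checked exception representing a command failure.
        public static class CommandException extends Exception {
            public CommandException(String message) {
                super(message);
            }
        }

        private static final Logger logger = Logger.getLogger(DeleteCommand.class.getName());

        private final int targetIndex;

        public DeleteCommand(int targetIndex) {
            // Defensive coding: reject invalid arguments at the boundary.
            if (targetIndex < 0) {
                throw new IllegalArgumentException("Index must be non-negative: " + targetIndex);
            }
            this.targetIndex = targetIndex;
        }

        // Boolean methods sound like booleans (is/has/can...).
        private boolean isWithinBounds(List<?> items) {
            return targetIndex < items.size();
        }

        // SLAP: this method stays at one level of abstraction and delegates the details.
        public String execute(List<String> items) throws CommandException {
            logger.info("Executing delete for index " + targetIndex);
            if (!isWithinBounds(items)) {
                throw new CommandException("No item at index " + targetIndex);
            }
            String removed = items.remove(targetIndex);
            assert removed != null : "list elements are never null";
            return "Deleted: " + removed;
        }
    }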

    2B. Effort

    Evaluates: how much value you contributed to the product

    Method: Evaluated in two steps.

    Step 1: Evaluate the effort for the entire project. This is evaluated by peers who tested your product, and tutors.

    For reference, here are some grading instructions given to evaluators:

    Quality: Compared to AB3, the quality of this product is,

    Effort: Assume the effort required to create AB3 from scratch is 10 on a scale of 0 to 30. How much effort do you estimate the team put into this project?

    • Do not give a high value just to be nice. Your responses will be used to evaluate your effort estimation skills.

    Step 2: Evaluate how much of that effort can be attributed to you. This is evaluated by team members, and tutors.

    For reference, here are some grading instructions given to evaluators:

    Evaluate the contribution to the product by each team member.

    • Count all implementation/testing/documentation work as mentioned in that person's PPP.
    • Also look at the actual code written by the person.

    3. Project Grading: QA [ 10 marks]

    3A. Developer Testing:

    Evaluates: How well you tested your own feature

    Based on:

    1. functionality bugs in your work found by others during the PE
    2. your test code (note our expectations for automated testing)
    • There is no requirement for a minimum coverage level. Note that in a production environment you are often required to have at least 90% of the code covered by tests; in this project, it can be less. The lower your coverage, the higher the risk of regression bugs, which will cost marks if not fixed before the final submission.
    • You must write some tests so that we can evaluate your ability to write tests (a minimal sketch of such a test is given after this list).
    • How much of each type of testing should you do? We expect you to decide. You learned different types of testing and what they try to achieve. Based on that, you should decide how much of each type is required. Similarly, you can decide to what extent you want to automate tests, depending on the benefits and the effort required.
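
    For illustration only, here is a minimal sketch of an automated test. JUnit 5 is assumed (as used in AB3); the Parser class and its method are made-up names included only to keep the example self-contained, not a required structure.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    // Hypothetical JUnit 5 tests illustrating a positive case and a negative case.
    public class ParserTest {

        // Minimal made-up class under test, included so the example is self-contained.
        static class Parser {
            static int parseIndex(String s) {
                try {
                    return Integer.parseInt(s.trim());
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("Not a number: " + s);
                }
            }
        }

        @Test
        public void parseIndex_validInput_returnsIndex() {
            assertEquals(3, Parser.parseIndex(" 3 "));
        }

        @Test
        public void parseIndex_nonNumericInput_throwsException() {
            // Invalid input should be rejected, not silently ignored.
            assertThrows(IllegalArgumentException.class, () -> Parser.parseIndex("three"));
        }
    }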

    These are considered functionality bugs:
    • Behavior differs from the User Guide
    • A legitimate user behavior is not handled e.g. incorrect commands, extra parameters
    • Behavior is not specified and differs from normal expectations e.g. error message does not match the error

    3B. System/Acceptance Testing:

    Evaluates: How well you can system-test/acceptance-test a product

    Based on: bugs you found in the Practical Exam. In addition to functionality bugs, you get credit for reporting documentation bugs and feature flaws.

    Notes on how marks are calculated based on PE product testing
    • Of 3A and 3B above, the one you do better will be given a 70% weight and the other a 30% weight so that your total score is driven by your strengths rather than weaknesses.
    • Bugs rejected by the dev team, if the rejection is approved by the teaching team, will not affect the marks of the tester or the developer.
    • The penalty/credit for a bug varies based on,
      • The severity of the bug: severity.High > severity.Medium > severity.Low > severity.VeryLow
      • The type of the bug: type.FunctionalityBug > type.DocumentationBug > type.FeatureFlaw
    • The penalty for a bug is divided equally among assignees.
    • The developers are not penalized for the duplicate bug reports they received but the testers earn credit for the duplicate bug reports they submitted as long as the duplicates are not submitted by the same tester.
    • Obvious bugs earn less credit for the tester and slightly more penalty for the developer.
    • If the team you tested has a low bug count i.e., total bugs found by all testers is low, we will fall back on other means (e.g., performance in PE dry run) to calculate your marks for system/acceptance testing.
    • Your marks for developer testing depends on the bug density rather than total bug count. Here's an example:
      • n bugs found in your feature; it is a difficult feature consisting of a lot of code → 4/5 marks
      • n bugs found in your feature; it is a small feature with a small amount of code → 1/5 marks
    • You don't need to find all the bugs in the product to get full marks. For example, finding half of the bugs in that product, or 4 bugs, whichever is lower, could earn you full marks.
    • Excessive incorrect downgrading/rejecting/duplicate-flagging, if deemed an unethical attempt to game the system, may be penalized.

    4. Project Grading: Documentation [ 10 marks]

    Evaluates: your contribution to project documents

    Method: Evaluated in two steps.

    Step 1: Evaluate the whole UG and DG. This is evaluated by peers who tested your product, and tutors.

    For reference, here are some instructions given to evaluators:

    UG: Compared to AB3, the quality of this UG is,

    DG: similar to UG

    Step 2: Evaluate how much of that effort can be attributed to you. This is evaluated by team members, and tutors.

    For reference, here are some grading instructions given to evaluators:

    Q: Evaluate the contribution to the UG by each team member. Note that your evaluation must correspond to RepoSense data and the claims made by the PPP of each member.

    Q: Evaluate the contribution to the DG by each team member.

    Q: Which of these types of UML diagrams in the DG did you personally add (or significantly modify)?

    • Class Diagrams
    • Object Diagrams
    • Sequence Diagrams
    • Activity Diagrams

    In addition, UG and DG bugs you received in the PE will be considered for grading this component.

    These are considered UG bugs (if they hinder the reader):
    • Not enough visuals, e.g., screenshots/diagrams.
    • The visuals are not well integrated into the explanation.
    • The visuals are unnecessarily repetitive, e.g., the same visual repeated with minor changes.
    • Not enough examples, e.g., sample inputs/outputs.
    • The explanation is too brief or unnecessarily long.
    • The information is hard to understand for the target audience, e.g., uses terms the reader might not know.
    • The document looks messy or not well formatted.

    These are considered DG bugs (if they hinder the reader):
    • Any of the UG bug types listed above.
    • UML notation incorrect or not compliant with the notation covered in the module.
    • Some other type of diagram used when a UML diagram would have worked just as well.
    • The diagram used is not suitable for the purpose it is used for.
    • The diagram is too complicated.
    • Excessive use of code, e.g., a large chunk of code is cited when a smaller extract would have sufficed.


    5. Project Grading: Project Management [ 5 + 5 = 10 marks]

    5A. Process:

    Evaluates: How well you did in project management related aspects of the project, as an individual and as a team

    Based on: tutor/bot observations of project milestones and GitHub data

    Milestones need to be reached by the midnight before the tutorial for them to be counted as achieved. To get a good grade for this aspect, achieve at least 60% of the recommended milestone progress.

    Other criteria:

    • Good use of GitHub milestones
    • Good use of GitHub release mechanism
    • Good version control, based on the repo
    • Reasonable attempt to use the forking workflow
    • Good task definition, assignment and tracking, based on the issue tracker
    • Good use of buffers (opposite: everything at the last minute)
    • Project done iteratively and incrementally (opposite: doing most of the work in one big burst)

    5B. Team-tasks:

    Evaluates: How much you contributed to team-tasks

    Based on: peer evaluations, tutor observations

    To earn full marks, you should have done a fair share of the team tasks. You can earn bonus marks by doing more than your fair share.

    Relevant: [Admin tP Scope → Examples of team-tasks ]

    Here is a non-exhaustive list of team-tasks:

    1. Necessary general code enhancements e.g.,
      1. Work related to renaming the product
      2. Work related to changing the product icon
      3. Morphing the product into a different product
    2. Setting up the GitHub repo, Travis, AppVeyor, etc.
    3. Maintaining the issue tracker
    4. Release management
    5. Updating user/developer docs that are not specific to a feature e.g. documenting the target user profile
    6. Incorporating more useful tools/libraries/frameworks into the product or the project workflow (e.g. automate more aspects of the project workflow using a GitHub plugin)

