Email cs2103@comp.nus.edu.sg if you did not receive the submission link on time.

Note that project grading is not competitive (not bell curved). CS2103T projects will be assessed separately from CS2103 projects. Given below is the marking scheme.
Total: 45 marks (35 individual marks + 10 team marks)
See the sections below for details of how we assess each aspect.
Evaluates: how well your features fit together to form a cohesive product (not how many features or how big the features are) and how well it matches the target user
Evaluated by:
For reference, here are some grading instructions given to evaluators:
Evaluate the product design based on the User Guide and the actual product behavior.
Target user:
- target user specified and appropriate: The target user is clearly specified, prefers typing over other modes of input, and is not too general (should be narrowed to a specific user group with certain characteristics).
Value to the target user:
- value specified and matching: The value offered by the product is clearly specified and matches the target user.
- optimized for the target user: It feels like a fast typist can be more productive with the app, compared to an equivalent GUI app without a CLI.
In addition, feature flaws reported in the PE will be considered when grading this aspect.
These will be considered feature flaws:
- The feature does not solve the stated problem of the intended user i.e., the feature is 'incomplete'.
- Hard-to-test features.
- Features that don't fit well with the product.
- Features that are not optimized enough for fast typists or target users.
2A. Code quality
Evaluates: the quality of the code you have written yourself
Based on: the parts of the code you claim as written by you
Evaluation method: manual inspection by tutors + automated-analysis by a script
For reference, here are some grading instructions given to evaluators:
At least some evidence of these (see here for more info; see also the sketch after this list):
- logging
- exceptions
- assertions
- defensive coding
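As a concrete (but invented) illustration, the sketch below shows all four kinds of evidence in one small Java command class. DeleteItemCommand, CommandException, and the other names are made up for this sketch and are not prescribed by the module:

```java
import java.util.List;
import java.util.logging.Logger;

/** Illustrative command class; all names here are invented for this sketch. */
public class DeleteItemCommand {

    /** Exceptions: a user-facing error is signalled instead of failing silently. */
    public static class CommandException extends Exception {
        public CommandException(String message) {
            super(message);
        }
    }

    private static final Logger logger = Logger.getLogger(DeleteItemCommand.class.getName());

    private final int targetIndex;

    public DeleteItemCommand(int targetIndex) {
        // Defensive coding: reject bad arguments at construction time.
        if (targetIndex < 0) {
            throw new IllegalArgumentException("Index must be non-negative: " + targetIndex);
        }
        this.targetIndex = targetIndex;
    }

    public void execute(List<String> items) throws CommandException {
        // Logging: record the operation to help future debugging.
        logger.info("Executing delete for index " + targetIndex);

        if (targetIndex >= items.size()) {
            throw new CommandException("No item at index " + targetIndex);
        }

        String removed = items.remove(targetIndex);
        // Assertions: check an internal invariant (only active with the -ea JVM flag).
        assert removed != null : "A removed item should never be null";
    }
}
```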
No coding standard violations e.g. all boolean variables/methods sound like booleans. Checkstyle can prevent only some coding standard violations; others need to be checked manually.
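For instance, with invented names:

```java
public class NamingExamples {
    // Bad: 'deletion' does not sound like a boolean.
    private boolean deletion;

    // Better: reads as a true/false statement.
    private boolean isDeleted;

    // Bad: a boolean-returning method named like a plain getter.
    public boolean status() {
        return isDeleted;
    }

    // Better: sounds like a yes/no question.
    public boolean hasBeenDeleted() {
        return isDeleted;
    }
}
```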
SLAP is applied at a reasonable level. Long methods or deeply-nested code are symptoms of low-SLAP.
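Below is a small invented sketch of what reasonable SLAP can look like: the top-level method reads as a sequence of steps at one level of abstraction, with the string-handling details pushed into helpers. A low-SLAP version would inline all of it into one long method:

```java
import java.util.Arrays;

/** Illustrative sketch of applying SLAP; all names are made up. */
public class ReportPrinter {

    // High-level method: each step sits at the same level of abstraction.
    public void printReport(String csvLine) {
        String[] fields = parseFields(csvLine);
        String header = formatHeader(fields[0]);
        System.out.println(header + " " + formatBody(fields));
    }

    private String[] parseFields(String csvLine) {
        return csvLine.split(",");
    }

    private String formatHeader(String title) {
        return "[" + title.trim().toUpperCase() + "]";
    }

    private String formatBody(String[] fields) {
        return String.join(" | ", Arrays.copyOfRange(fields, 1, fields.length));
    }
}
```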
No noticeable code duplication i.e., if there are multiple blocks of code that vary only in minor ways, try to extract the similarities into one place, especially in test code.
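Here is one way such extraction might look in test code, assuming JUnit 5; Parser and ParseException are made-up stand-ins for whatever your project actually has:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.fail;

import org.junit.jupiter.api.Test;

/** Illustrative test class; Parser and ParseException are invented stand-ins. */
public class ParserTest {

    static class ParseException extends Exception {
        ParseException(String message) {
            super(message);
        }
    }

    static class Parser {
        void parse(String input) throws ParseException {
            String[] tokens = input.trim().split("\\s+");
            if (tokens.length < 2) {
                throw new ParseException("Index is required");
            }
            if (!tokens[1].matches("\\d+")) {
                throw new ParseException("Index must be a number");
            }
        }
    }

    // The shared try/catch steps are extracted into one helper,
    // instead of being repeated in every failure test.
    private void assertParseFailure(String input, String expectedMessage) {
        try {
            new Parser().parse(input);
            fail("Expected a ParseException for input: " + input);
        } catch (ParseException e) {
            assertEquals(expectedMessage, e.getMessage());
        }
    }

    @Test
    public void parse_missingIndex_failure() {
        assertParseFailure("delete", "Index is required");
    }

    @Test
    public void parse_nonNumericIndex_failure() {
        assertParseFailure("delete x", "Index must be a number");
    }
}
```

After the extraction, each failure test states only what varies: the input and the expected message.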
Evidence of applying code quality guidelines covered in the module.
2B. Effort
Evaluates: how much value you contributed to the product
Method: Evaluated in two steps.
Step 1: Evaluate the effort for the entire project. This is evaluated by peers who tested your product, and tutors.
For reference, here are some grading instructions given to evaluators:
Quality: Compared to AB3, the quality of this product is,
Effort: Assume the effort required to create AB3 from scratch is 10 on a scale of 0 to 30. How much effort do you estimate the team put into this project?
- Do not give a high value just to be nice. Your responses will be used to evaluate your effort estimation skills.
Step 2: Evaluate how much of that effort can be attributed to you. This is evaluated by team members, and tutors.
For reference, here are some grading instructions given to evaluators:
Evaluate the contribution to the product by each team member.
- Count all implementation/testing/documentation work as mentioned in that person's PPP.
- Also look at the actual code written by the person.
3A. Developer Testing:
Evaluates: How well you tested your own feature
Based on: bugs found in your work during the PE.
These are considered functionality bugs:
- Behavior differs from the User Guide.
- A legitimate user behavior is not handled e.g., incorrect commands, extra parameters.
- Behavior is not specified and differs from normal expectations e.g., error message does not match the error.
3B. System/Acceptance Testing:
Evaluates: How well you can system-test/acceptance-test a product
Based on: bugs you found in the Practical Exam. In addition to functionality bugs, you get credit for reporting documentation bugs and feature flaws.
The credit you receive for a reported bug depends on its severity and type:
- severity.High > severity.Medium > severity.Low > severity.VeryLow
- type.FunctionalityBug > type.DocumentationBug > type.FeatureFlaw

Marks are relative to the size/complexity of the work involved, e.g.,
- n bugs found in your feature; it is a difficult feature consisting of a lot of code → 4/5 marks
- n bugs found in your feature; it is a small feature with a small amount of code → 1/5 marks

4. Documentation:
Evaluates: your contribution to project documents
Method: Evaluated in two steps.
Step 1: Evaluate the whole UG and DG. This is evaluated by peers who tested your product, and tutors.
For reference, here are some instructions given to evaluators:
UG: Compared to AB3, the quality of this UG is,
DG: similar to UG
Step 2: Evaluate how much of that effort can be attributed to you. This is evaluated by team members, and tutors.
For reference, here are some grading instructions given to evaluators:
Q: Evaluate the contribution to the UG by each team member. Note that your evaluation must correspond to RepoSense data and the claims made by the PPP of each member.
Q: Evaluate the contribution to the DG by each team member.
Q: Which types of UML diagrams in the DG did you personally add (or significantly modify)?
- Class Diagrams
- Object Diagrams
- Sequence Diagrams
- Activity Diagrams
In addition, UG and DG bugs you received in the PE will be considered for grading this component.
These are considered UG bugs (if they hinder the reader):
- Not enough visuals e.g., screenshots/diagrams.
- The visuals are not well integrated into the explanation.
- The visuals are unnecessarily repetitive e.g., same visual repeated with minor changes.
- Not enough examples e.g., sample inputs/outputs.
- The explanation is too brief or unnecessarily long.
- The information is hard to understand for the target audience e.g., using terms the reader might not know.
- The document looks messy, or not well-formatted.
These are considered DG bugs (if they hinder the reader):
All the UG bug types listed above, and in addition:
- UML notation incorrect or not compliant with the notation covered in the module.
- Some other type of diagram used when a UML diagram would have worked just as well.
- The diagram used is not suitable for the purpose it is used for.
- The diagram is too complicated.
- Excessive use of code e.g., a large chunk of code is cited when a smaller extract would have sufficed.
5A. Process:
Evaluates: How well you did in project management related aspects of the project, as an individual and as a team
Based on: tutor/bot observations of project milestones and GitHub data
Milestones need to be reached by midnight before the tutorial for them to be counted as achieved. To get a good grade for this aspect, achieve at least 60% of the recommended milestone progress.
Other criteria:
5B. Team-tasks:
Evaluates: How much you contributed to team-tasks
Based on: peer evaluations, tutor observations
To earn full marks, you should have done a fair share of the team tasks. You can earn bonus marks by doing more than your fair share.
As most of the work is graded individually, it is OK to do less or more than equal share in your project team.
Tips:
Contribute to all aspects of the project e.g. write backend code, frontend code, test code, user documentation, and developer documentation. Reason: If you limit yourself to certain aspects only, you could lose marks allocated for the aspects you did not do. In addition, the final exam assumes that you are familiar with all aspects of the project.
Do all the work related to your enhancement yourself. Reason: If there is no clear division of who did which enhancement, it will be difficult to divide project credit (or assign responsibility for bugs detected by testers) later.
Divide the components of the product among team members. Notwithstanding the above, you are still expected to divide the components of the product among team members so that each team member is in charge of one or more components. While others will modify those components as necessary for the features they are implementing, your role as the person in charge of a component is to guide others when they modify that component (reason: you are supposed to be the most knowledgeable about it) and to protect it from degrading e.g., you can review others' changes to your component and suggest possible changes.
Percentile | 25 | 50 | 75 |
---|---|---|---|
LoC | ~1000 | ~1500 | ~2500 |
Team-tasks are the tasks that someone in the team has to do. Marks allocated to team-tasks will be divided among team members based on how much each member contributed to those tasks.
Here is a non-exhaustive list of team-tasks:
Roles indicate aspects you are in charge of and responsible for. E.g., if you are in charge of documentation, you are the person who should decide which parts of the documentation are to be done by whom, ensure the document is in the right format, ensure consistency, etc.
This is a non-exhaustive list; you may define additional roles.
Model, UI, Storage, etc. If you are in charge of a component, you are expected to know that component well, and review changes done to that component in v1.3-v1.4.

Please make sure each of the important roles is assigned to one person in the team. It is OK to have a 'backup' for each role, but for each aspect there should be one person who is unequivocally the person responsible for it.
Normally, the prof will respond within 24 hours if it was an email sent to the prof or a forum post directed at the prof. If you don't get a response within that time, please feel free to remind the prof. It is likely that the prof did not notice your post or the email got stuck somewhere.
Similarly we expect you to check email regularly and respond to emails written to you personally (not mass email) promptly.
Not responding to a personal email is a major breach of professional etiquette (and general civility). Imagine how pissed off you would be if you met the prof along the corridor, said 'Hi prof, good morning' and the prof walked away without saying anything back. Not responding to a personal email is just as bad. Always take a few seconds to at least acknowledge such emails. It doesn't take long to type "Noted. Thanks" and hit 'send'.
The promptness of a reply is even more important when the email requests something that you cannot provide. Imagine you wrote to the prof requesting a reference letter and the prof did not respond at all because he/she did not want to give you one. You'll be quite frustrated because you wouldn't know whether to look for another prof or wait longer for a response. Saying 'No' is fine and in fact a necessary part of professional life; but saying nothing is not acceptable. If you didn't reply, the sender will not even know whether you received the email.
Sometimes, small things matter in big ways. e.g., all other things being equal, a job may be offered to the candidate who has the neater looking CV although both have the same qualifications. This may be unfair, but that's how the world works. Students forget this harsh reality when they are in the protected environment of the school and tend to get sloppy with their work habits. That is why we reward all positive behavior, even small ones (e.g., following precise submission instructions, arriving on time etc.).
But unlike the real world, we are forgiving. That is why you can still earn full marks for participation even if you miss a few things here and there.
Related article: This Is The Personality Trait That Most Often Predicts Success (this is why we reward things like punctuality).