People who program in Java or Python might not understand this at first.
You do not get that feature for free in C.
cs50, lec2
#include <cs50.h>
#include <stdio.h>
float average(int length, int array[]);
int main(void)
{
    int n = get_int("Number of scores: ");
    int scores[n];
    for (int i = 0; i < n; i++)
    {
        scores[i] = get_int("Score %i: ", i + 1);
    }
    printf("Average: %.1f\n", average(n, scores));
}
float average(int length, int array[])
{
    int sum = 0;
    for (int i = 0; i < length; i++)
    {
        sum += array[i];
    }
    return (float) sum / (float) length;
}
cs50, lec2
If you have two integers that you want to divide without everything after the decimal point being thrown away, you can type cast so that the computer treats the integers as floats, yielding a more precise result.
const int N = 3;
int sum = 178;
float average = (float) sum / (float) N;
cs50, lec2
Placed outside of main(), declared const, and with the variable name capitalized: that is the convention. This way you won't have to go visually fishing for a certain number value throughout your code in the future.
#include <stdio.h>

const int N = 3; // global scope; capitalized name denotes a constant

int main(void)
{
    int scores[N];
    scores[0] = 72;
    scores[1] = 73;
    scores[2] = 33;
    printf("average: %i\n", (scores[0] + scores[1] + scores[2]) / N);
}
cs50, lec2
#include <stdio.h>
#include <cs50.h>
int main(void)
{
    int scores[3];
    scores[0] = 72;
    scores[1] = 73;
    scores[2] = 33;
    printf("average: %i\n", (scores[0] + scores[1] + scores[2]) / 3);
}
cs50, lec2
#include <stdio.h>
#include <cs50.h>
int main(void)
{
    int score1 = 72;
    int score2 = 73;
    int score3 = 33;
    printf("average: %i\n", (score1 + score2 + score3) / 3);
}
cs50, lec2
refer to hi.c in the previous post
Type casting: convert characters to the ASCII number values that represent them.
#include <stdio.h>
int main(void)
{
    char c1 = 'H';
    char c2 = 'i';
    char c3 = '!';
    printf("%i %i %i\n", (int) c1, (int) c2, (int) c3);
}
In reality, clang is smart enough that even if you don't explicitly cast the chars to ints, it will know the conversion needs to occur and will implicitly cast the chars to ints for you.
#include <stdio.h>
int main(void)
{
    char c1 = 'H';
    char c2 = 'i';
    char c3 = '!';
    printf("%i %i %i\n", c1, c2, c3);
}
source: cs50, lec2
#include <stdio.h>
int main(void)
{
    char c1 = 'H';
    char c2 = 'i';
    char c3 = '!';
    printf("%c %c %c\n", c1, c2, c3);
}
source: cs50, lec2
char c1 = 'a';
string s1 = "apple";