So, as we've seen, when we declare our variables we give them a type, and the most fundamental types in Java are what are called primitive data types. These are the data types that are built into the language. Now, when you hear the term primitive data type, you may think there's something kind of lowly about them, something less than modern about them, but that's not true at all. Primitive data types are very important. They're really the foundation of all other types that we use in Java, so they're the strong foundation we build on for any other data types we use in our programs. There are four categories of primitive types in Java: the integer types, floating point, character, and Boolean.

So, let's look first at the integer types. There are four different integer types, and the difference between them is really just the amount of storage they take up, but that difference in storage size affects the range of values that can be stored there. The smallest integer type is the byte type. It takes 8 bits, so it can only store values between minus 128 and positive 127. When we declare it, we just declare a variable of type byte and assign an integer literal to it. So here we have byte numberOfEnglishLetters = 26. The next larger integer type is short, which takes up 16 bits. That lets it store values between about negative 32,000 and positive 32,000 (specifically, -32,768 to 32,767). You use it the same way, so here we have short feetInAMile = 5280. Probably the most used integer type is the one called int. That's a 32-bit integer, and being 32 bits, it lets you store values between about minus 2 billion and positive 2 billion. And then the big integer type is what we call a long. That's a 64-bit integer, and you can store huge values in it. The key thing to notice, though, is that when you use a long literal value, you must put a capital L at the end of it.

Now, Java also has floating-point types. The floating-point types conform to the IEEE 754 standard for floating point, and that may or may not be meaningful to you. What it really comes down to is that floating-point types allow you to store values that have a fractional portion to them. Basically, they support positive, negative, and zero values that have some fractional portion. There are a lot of nuances to the way floating points work, and they're outside the scope of this course, but I've got that URL on the screen there for you. If you want to know more about the details of how floating points work and some of their oddities, I encourage you to check that out. Basically, we have two floating-point types in Java. First, we have the float type, which is a 32-bit floating-point value. Notice that when we use the float type, we must put an f at the end of any literals for it. So, when we declare this float called milesInAMarathon, we say 26.2f, saying it's a float value. And then we also have double, which is a 64-bit floating-point value. If you just use a literal that has a decimal point in it, the compiler assumes it's a double, but you can also make it explicitly a double by putting a d at the end. So, if we look here, we have double atomWidthInMeters, and that 0.0000000001d denotes that it is a double literal.
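Here is a small sketch that pulls the numeric declarations above together so you can run them yourself. The byte, short, float, and double variables match the examples discussed; the int and long variable names (secondsInAYear, worldPopulation) are hypothetical examples added here just to illustrate those two types, since the transcript doesn't name specific examples for them.

public class NumericPrimitives {
    public static void main(String[] args) {
        byte numberOfEnglishLetters = 26;          // 8 bits: -128 to 127
        short feetInAMile = 5280;                  // 16 bits: -32,768 to 32,767
        int secondsInAYear = 31_536_000;           // 32 bits: roughly -2 billion to 2 billion
        long worldPopulation = 7_800_000_000L;     // 64 bits: note the capital L on the literal

        float milesInAMarathon = 26.2f;            // 32-bit floating point: trailing f is required
        double atomWidthInMeters = 0.0000000001d;  // 64-bit floating point: trailing d is optional

        System.out.println(numberOfEnglishLetters + " " + feetInAMile);
        System.out.println(secondsInAYear + " " + worldPopulation);
        System.out.println(milesInAMarathon + " " + atomWidthInMeters);
    }
}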
And our last two primitive types are character and Boolean. A character, or the char type, stores a single Unicode character. You denote literals of this type by putting single quotes around the character. So, if I say char regularU = 'U', that assigns that character to the variable. Note that this is different from strings; we'll talk about strings later. A char is just a single character value. And because the char type supports Unicode, you can specify any valid Unicode character in there. So, if you want to assign a Unicode character that you don't have on your keyboard, you can use its Unicode code point with the \u notation. You see here I've got char accentedU = '\u00DA', and that says it is actually a U with an accent on it (Ú). And then finally, we have the boolean type. A boolean stores true or false, and the literals for it are simply true and false, so I can say boolean iLoveJava = true.
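And here is a similar runnable sketch for char and boolean; the variable names come straight from the examples above.

public class CharAndBooleanPrimitives {
    public static void main(String[] args) {
        char regularU = 'U';        // a single character literal in single quotes
        char accentedU = '\u00DA';  // Unicode escape for Ú (U with an acute accent)
        boolean iLoveJava = true;   // boolean holds only the values true and false

        System.out.println(regularU);
        System.out.println(accentedU);
        System.out.println(iLoveJava);
    }
}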