Programming Ugly

Catfish blurted:

From what I've read so far, my impression is that DCI is a methodology promoting helper objects that link the working objects, and not so much a paradigm.


Congratulations! You've just invented epicycles!

See how paradigm shifts go?

I just gave a talk at the Java conference in Amsterdam, based solely on quotes like this one, collected over the past four years, about how people faced with a new paradigm go through all the Kuhnian stages: denial, discounting, rationalization, and co-option. I wish I could have waited until I got these.

People are so predictable. I predict that if you give yourself time, you'll eventually co-opt the idea.
I truly expected an intelligent counter-argument, but all I got was a lousy attempt to mock me! :(
No, not to mock, but to help point out where you are currently situated. Well, that, combined with an attempt to meet you at your operational posture.

Think of it as a Zen master offering a koan to the student. That is the level of the mindset shift you are faced with here. The problem, one more time, is that it is difficult to get incrementally from where you are to where you need to be to understand DCI. You keep coming back day after day after day. Take a month. Read. Think. Think again. Come back when you've got what you believe to be some DCI code in Ruby. Then we'll be able to have a concrete argument: I think that's a prerequisite to having an intelligent argument.

Again, your response is a caricature of someone stuck in a class paradigm.


I'm perfectly ok with doing OOP without classes - the example used classes/static interfaces just because this is a C++ forum. Also, I've been using most of the techniques described in the paper by Reenskaug and Coplien for some time, using the Scala language.

Whatever you say, this whole DCI thing looks to me like a solution in search of a problem. I'm 99% sure no one will be using the DCI concept 5-10 years from now. Why? Because of the following fundamental things:

1. It is extremely hard to explain the essence of DCI to a programmer during an elevator ride from the ground floor to the 10th. It is easy to explain the underlying concepts, but DCI as a whole remains somehow blurry.

2. DCI is composed of many other very simple concepts, like mixins or multiple dispatch, and brings little additional value over what is currently used and supported (most OOP languages have mixins, traits, roles, etc., so they have supported DCI almost directly for quite a long time). For me this sits perfectly within the OOP world. This is *not* a new paradigm. This is an incremental improvement, one of many possible design patterns.

3. Successful new design patterns and paradigms are *simple* in essence, and are usually discovered, not invented by theorists combining many simple patterns into one complex system. Think: lambda expressions, private state encapsulation, factory methods, virtual dispatch. Things that are often so different that they require new language constructs. And that are so influential that a good number of languages implement them as core features.

BTW: It very much resembles another failed "wannabe" paradigm: AOP. Some AOP techniques are used in practice, but almost no one builds systems the AOP way. So this is a design pattern, not a paradigm shift.

Update:
Back to the original question: if we are not talking C++, I could make the SpellCheck class a trait/role (in Ruby or Scala) and mix it into any TextBuffer that needs to be spellchecked. But that doesn't really solve any real problem; the coupling still exists. The SpellCheck still has to know how to get the text and how to get the valid words from the dictionary. So essentially it still has to rely on some fixed interface, even if that interface is not given explicitly in the code. If this interface changes, you have to fix the other components of the system, otherwise you won't be able to mix that "spellchecking role" in; it will fail either at compile time (Scala) or at run time (Ruby). There is no benefit in better cohesion or coupling. There might be a slight programming "convenience" benefit, because you can actually write a "TextBuffer with SpellCheck" type and you have all the methods/services in one place, so in the case of Scala the IDE can give you a nice list of methods, etc. So it is easier to use, and the code is probably more readable; you can also pass one object with an additional spellchecking role instead of two objects. But calling it a new paradigm is IMHO ridiculous.
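
For reference, here is a rough C++ analogue of that Scala mixin (a sketch only; TextBuffer, SpellCheck, and Dictionary are invented names standing in for the paper's example):

#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Dictionary {
    std::set<std::string> validWords;
    bool contains(const std::string& w) const { return validWords.count(w) != 0; }
};

// The role as a static mixin. Derived must provide words() and dictionary();
// that requirement is exactly the implicit interface coupling described above.
template <class Derived>
struct SpellCheck {
    std::vector<std::string> misspelledWords() const {
        const Derived& self = static_cast<const Derived&>(*this);
        std::vector<std::string> bad;
        for (const std::string& w : self.words())
            if (!self.dictionary().contains(w))
                bad.push_back(w);
        return bad;
    }
};

// "TextBuffer with SpellCheck": the buffer mixes the role in at compile time.
struct TextBuffer : SpellCheck<TextBuffer> {
    std::vector<std::string> text;
    Dictionary dict;

    const std::vector<std::string>& words() const { return text; }
    const Dictionary& dictionary() const { return dict; }
};

int main() {
    TextBuffer buffer;
    buffer.text = {"hello", "wrold"};
    buffer.dict.validWords = {"hello", "world"};
    for (const std::string& w : buffer.misspelledWords())
        std::cout << w << '\n'; // prints: wrold
}

If TextBuffer stops providing words() or dictionary(), mixing the role in fails at compile time, exactly as described above for Scala.
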
Cope wrote:

So let me get this straight. 1. You didn't read the links I pointed you to. 2. You are looking for insight. So, tell you what: Open your mind. Just believe in ESP. I'm transmitting to you now.


Yes, let's get this straight. I am certainly not seeking your guidance. You are the one making claims and providing nothing more than post after post of "take the red pill". Users have asked for examples, explanations, and even credentials and not a thing has changed in pages of long-winded, condescending nonsense.

moorecm wrote:
I'll admit, I didn't read any of those links. Regardless, there are some big claims here and not an ounce of insight being provided. For example, "Most people use OOP wrong." Is this true or should it actually be written as, "I think many people use OOP wrong." Or possibly many other variations such as, "I think some people wrongly apply a single OOP approach to all problems, which is inherently wrong."


Cope wrote:
I've worked with hundreds of projects over the past 30 years and I stand my ground. How many have you looked at?


That's all you have to say? I'm not here to impress. This sort of pissing match doesn't prove anything.
I'm with moorecm; Cope seems pretty aggressive. It seems like you're more interested in showing everyone else that they're wrong than actually helping anyone.
I'm perfectly ok with doing OOP without classes - the example used classes/static interfaces just because this is a C++ forum. Also, I've been using most of the techniques described in the paper by Reenskaug and Coplien for some time, using the Scala language.


This would be a much easier discussion around a Ruby example than around C++. Start with the right training wheels and then grow. Scala isn't bad but, like C++, it supports a very static implementation of DCI.

Whatever you say, this whole DCI thing looks to me like a solution in search of a problem. I'm 99% sure no one will be using the DCI concept 5-10 years from now. Why? Because of the following fundamental things:


You're probably right: no one ever got the original OOP right, either. Or patterns (your use of the term "pattern" below is laughable). Or most other new ideas. Programming is a craft with extremely high inertia.

But don't count it out yet. Watch language evolution in the near term to give DCI more convenient expression. Look for the forthcoming book on DCI in Ruby. There is already work afoot to create a new language that supports DCI directly.

1. It is extremely hard to explain the essence of DCI to a programmer during an elevator ride from the ground floor to the 10th. It is easy to explain the underlying concepts, but DCI as a whole remains somehow blurry.


Do you mean like OOP? Or anything else in programming?

As I said: many people have been able to get through the learning curve. You might consider that your being different says more about you than about DCI.

2. DCI is composed of many other very simple concepts, like mixins or multiple dispatch, and brings little additional value over what is currently used and supported (most OOP languages have mixins, traits, roles, etc., so they have supported DCI almost directly for quite a long time). For me this sits perfectly within the OOP world. This is *not* a new paradigm. This is an incremental improvement, one of many possible design patterns.


I have been asked to give the Heart of Software Development Lecture at AOSD 2012, where I will talk about this topic. I will present a paper there, "Six Wise Men and the Elephant," that shows how AOP, mixins, and these other ideas are partial foreshadowings of DCI. They are over-constrained reactions to the problems that people experienced with OOP, but they aren't DCI. They are just variations on the old paradigm.

3. Successful new design patterns and paradigms are *simple* in essence, and are usually discovered, not invented by theorists combining many simple patterns into one complex system.


Oh, DCI is very simple, and it reflects the way early OO programmers thought about and implemented their designs. Something got lost along the way, when classes were introduced.

There is a reason you can't see that, I suppose, but you're probably not interested in my insights on that.
I'm still curious: can you explain the core of the DCI concept in, let's say, 256 characters?
I can do that for the core of OOP: OOP is just building software from objects interacting with each other, where an object = private data + public methods to transform that data.

If DCI is just allowing objects to take different roles, then I tell you: it has been known and used in OOP long before someone came up with the term DCI.
I'm still curious: can you explain the core of the DCI concept in, let's say, 256 characters?


The core concept is that code should map onto the mental model of those interacting with it.

I can do that for the core of OOP: OOP is just building software from objects interacting with each other, where an object = private data + public methods to transform that data.


That sounds like a way to implement objects, rather than what they are.

If DCI is just allowing objects to take different roles, then I tell you: it has been known and used in OOP long before someone came up with the term DCI.


No, it is much more than that. But thanks for the tip. It's always good to have my historical experience reinforced by an independent perspective.
I know some people who plan on helping me with a project of mine. Because of that, I started working on a coding-standard specification document. Look at this:

-- INDENTATION --
Every time you nest another block of code, indent it with one more
TAB character than the enclosing block. For example:

for(w16 personIndex = 0; personIndex != peopleNumber; ++personIndex)
{
	Person* person = &people[personIndex];
	if (person->cool)
	{
		party->Invite(person);
	}
}


I couldn't think of any real examples that would intuitively make sense. So this became somewhat similar to Cope's style.
Your arguments sound somewhat laughable; nonetheless, I'll take them seriously.

The core concept is that code should map onto the mental model of those interacting with it.
This doesn't sound like a defined way of doing anything whatsoever. This means that it can't offer a good solution without adding all the known programming paradigms and methods together. What do you mean by the mental model of said people? This may, of course, just be me not understanding your arguments, but that would imply that, as said before, the core concept is too vague (or your explanation of it is).

That sounds like a way to implement objects, rather than what they are.
Actually, his description of OOP was way more accurate than your description of DCI. He tells us that OOP is built around objects (data and methods to transform said data) that interact with one another. You fail to give such an accurate description of DCI.

No, it is much more than that. But thanks for the tip.
Tell me then, WHAT is it? Honestly, be specific.
Tell me then, WHAT is it? Honestly, be specific.


I know my name's not Cope but it starts with a C so it's close enough.

DCI is about objects/global functions managing other objects through multiple inheritance and friendship.
This process of managing objects is likened to "giving them a role".
The code is supposed to look tidier and be easier to review.
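
Concretely, I picture something like this (a sketch of my own reading only; Account, SourceAccount, and transferMoney are invented names, and I've used plain delegation instead of multiple inheritance to keep it short):

#include <iostream>
#include <stdexcept>

// A dumb data object.
class Account {
    double balance_;
public:
    explicit Account(double b) : balance_(b) {}
    double balance() const { return balance_; }
    void deposit(double amount) { balance_ += amount; }
    void withdraw(double amount) {
        if (amount > balance_) throw std::runtime_error("insufficient funds");
        balance_ -= amount;
    }
};

// Roles: thin managers that operate on a data object for one interaction.
class SourceAccount {
    Account& account_;
public:
    explicit SourceAccount(Account& a) : account_(a) {}
    void transferOut(double amount) { account_.withdraw(amount); }
};

class SinkAccount {
    Account& account_;
public:
    explicit SinkAccount(Account& a) : account_(a) {}
    void transferIn(double amount) { account_.deposit(amount); }
};

// The use case assigns the roles and runs the interaction.
void transferMoney(Account& from, Account& to, double amount) {
    SourceAccount source(from);
    SinkAccount sink(to);
    source.transferOut(amount);
    sink.transferIn(amount);
}

int main() {
    Account a(100), b(0);
    transferMoney(a, b, 40);
    std::cout << a.balance() << ' ' << b.balance() << '\n'; // prints: 60 40
}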

As others have pointed out, this is not so much a new paradigm as it is a design pattern.

I'm sure Cope will correct me if I got anything wrong, clearly and concisely explaining the mistakes I made, adding more DCI specifics to the list, and not attempting to ridicule me in the process.
...
Besides that, I'm not talking about natural language (those references don't make sense).
In the linguistic world
'does he something' is very different from 'he does something'.
You're the one who brought up linguistics. If you're talking about some sort of abstract linguistics dealing with the foundation of language then there's even less of a difference between two languages such that "he does something" and "something does he" are each other's translations. The languages are, shall we say, isomorphic. Any preference you may have to either is entirely subjective and not based on any actual practical difference.

I'm rather talking about progress in programming languages, which actually goes in the direction of natural language.
Yeah, totally. Just the other day, a friend told me "you.give(room.objects["stapler"],this)".
Seriously, though. A programming language that resembled natural language in any meaningful way (i.e. more than just syntactic sugar composed of a reordering of tokens or introduction of meaningless keywords) wouldn't be very useful. That "progress" you're talking about is entirely superficial and fictitious.

The real progress has been in finding new ways to represent computation. The differences between procedural programming, FP, OOP, and metaprogramming are much deeper than "Print(list)" vs. "list.Print()". Your conception of programming is overly syntax-centric, and I think you'll have a hard time finding anyone who agrees with it.

The C++ version makes much more sense.
The C version tells the reader that it prints and that it uses a list; what exactly Print() does is unknown.
The C++ version tells the reader that the list prints its printable members (anything else I'd consider a bug).
I'll rephrase CodeMonkey's question, then:
Which of these is clearer?
o12.f71();
f71(o12);
The core concept is that code should map onto the mental model of those interacting with it.


So basically some set of code is supposed to magically change based on who's using it? >_>
Cope got what he came for, an apology from OP.
Everything else was just trolling fun.
This doesn't sound like a defined way of doing anything whatsoever. This means that it can't offer a good solution without adding all the known programming paradigms and methods together. What do you mean by the mental model of said people? This may, of course, just be me not understanding your arguments, but that would imply that, as said before, the core concept is too vague (or your explanation of it is).


I was not asked for a defined way to do it. I was asked for a 256-byte summary. Make up your mind.

Can't you see you're stuck in one paradigm looking at another?

Actually, his description of OOP was way more accurate than your description of DCI. He tells us that OOP is built around objects (data and methods to transform said data) that interact with one another. You fail to give such accurate descriptions about DCI.


His description was somewhat precise, but wasn't quite accurate and, more importantly, was superficial.

So basically some set of code is supposed to magically change based on who's using it? >_>


In a way: yes. Think of it as component-oriented programming, except that a new component is dynamically created for every use case to suit the end user's mental model.
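
For illustration only, a crude C++ sketch of that idea (Document and ReviewUseCase are invented names, and std::function stands in for a proper role-binding mechanism):

#include <functional>
#include <iostream>
#include <string>

// A dumb data object.
struct Document {
    std::string text;
};

// A "component" assembled at run time for a single use case: the data
// object plus whatever role behavior this particular interaction needs.
struct ReviewUseCase {
    Document& document;
    std::function<void(Document&)> annotate; // role behavior bound per use case

    void run() { annotate(document); }
};

int main() {
    Document d{"draft"};

    // The role exists only for the duration of this interaction.
    ReviewUseCase review{d, [](Document& doc) { doc.text += " [reviewed]"; }};
    review.run();

    std::cout << d.text << '\n'; // prints: draft [reviewed]
}

The point of the sketch is only that the role is attached per interaction rather than baked into the Document class.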

His description was somewhat precise, but wasn't quite accurate and, more importantly, was superficial.


What was superficial or inaccurate in my definition? Care to elaborate?
What was superficial or inaccurate in my definition? Care to elaborate?


Compare it to Kay's definition. He invented the term.

That people use their own definition ("in my definition") is both cause and effect with respect to a programming language culture. If you can't see how to do OOP in C++, then you'll contort the definition to fit what you know how to do. A technology-based community such as this one prefers buzzword compliance to exploring the "why" of doing something. Of course, an initial misunderstanding of how to do OO will shape the way you use the language, and the way the community uses the language.

Alan Kay said that he did not have C++ in mind when he coined the term.

And Bjarne Stroustrup has never called C++ an object-oriented programming language.