This is an interesting question, but the answer is no, because of the semantics of the stack and program counter hardware. If you were to design a new architecture, you might gain some advantage for some subset of computable problems, though I cannot see any right now.
Some history of computing theory, and where these two parts of a modern CPU come into the picture, may be of use here.
Way back in the history of computing, even before I was born, a genius by the name of Alan Turing constructed a gedankenexperiment I consider equal to Einstein's: a mental machine known as the Universal Turing Machine. He did this to produce a mathematically tractable tool for exploring the limits of mechanical computing, and what kinds of problems, if any, could or could not be mechanically computed. A Turing Machine consists of a tape bearing coded instructions and data, a read/write head that can read or modify the symbol under it on the tape, a mechanism for moving the tape from one position to another, and a device that acts on the symbol just read to produce movement and more marks on the tape. This last bit was always a little cloudy, or hand-wavy, to me, but it essentially describes what became the ALU. And the Program Counter of today is what decides which symbol on the tape is under the read/write head. Note that there is no stack in this picture.
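To make that description concrete, here is a minimal sketch of such a machine in Python. The function name, the state names, and the little unary-increment program are all my own invention for illustration; the point is just how few moving parts the model has: a table of transitions, a tape, and a head position (the part the Program Counter later took over).

```python
def run_tm(program, tape, state="start", blank="_"):
    """program maps (state, symbol) -> (next_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0. The head position plays
    the role the Program Counter plays in a modern CPU."""
    cells = dict(enumerate(tape))   # a sparse dict models the unbounded tape
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Toy program: append one mark to a unary number (i.e. increment it).
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the existing marks
    ("start", "_"): ("halt",  "1",  0),   # write one more mark, then halt
}
print(run_tm(increment, "111"))  # -> 1111
```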
The genius is that this machine really is Universal: he proved that if a given problem was computable at all, it could be computed on a Turing Machine. This had many consequences. In particular, no design for a computer was considered complete unless it could be proven equivalent to a Turing Machine. Those that pass this test are called Turing Complete. There were (and are, I would guess) computing devices built to solve specific problems that are not Turing Complete (I know because I designed one such myself eons ago), but there are no computers that can do any more than a Turing Machine can. Only faster. I reserve judgement on Quantum Computers here, but they rely on non-mechanical processes, so they may be immune.
The other major consequence was that what became our familiar machines of today rested on the insight of von Neumann, who saw a way to build a computing machine that was provably Turing Complete. The von Neumann architecture is what every computer you are likely to touch today is based on. And it, too, did not originally include a stack in hardware.
The idea of a stack grew out of theoretical advances in CS. In particular, the stack appears in two classes of automata studied alongside the Turing Machine: the Single Stack Push-down Automaton, which recognizes the context-free languages but is not Turing Complete on its own, and the Dual Stack Push-down Automaton, which was proven to be Turing Complete. Later those ideas were applied to real-world hardware, and they really help. I was fortunate to learn to code ASM on a machine with a stack in 1967, but then had the misfortune of writing a language parser in 1969 on a machine without one. But all machines have a Program Counter. Have to move that tape, have to progress down its list of symbols. The PC, as I said, models the tape-position mechanism, and the computer's memory models the tape itself. What the stack brought to the picture was a convenient way to stash temporary state. Most machines have only one, but you can see the Dual Stack PDA at work in languages like FORTH. In fact, it was through the PDA studies that I first came across Threaded Link Interpreters, then FORTH as an exemplar of such.
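You can see the "stash temporary state" role of a single data stack in a toy postfix evaluator, the same evaluation discipline FORTH uses for its data stack. This is my own illustrative sketch, not real FORTH:

```python
def rpn(tokens):
    """Evaluate a postfix (Reverse Polish) expression using one data
    stack: operands are pushed, operators pop their arguments."""
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in tokens.split():
        if tok in ops:
            b = stack.pop()          # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

print(rpn("3 4 + 2 *"))  # -> 14
```

A real FORTH adds the second stack (the return stack) to hold control-flow state, which is what pushes it from the context-free world into Dual Stack PDA territory.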
Now the challenge: I have already mentioned "proven to be Turing Complete" several times. This might sound like a very abstract and esoteric thing to do, but the Turing Machine model is conceptually quite simple, and to prove that some other computing model is Turing Complete, all you have to do is implement a Turing Machine on it. That is how I proved for myself that dual-stack PDAs were Turing Complete in my M.Sc. class. So, if this proposed model has any legs, you should be able to implement a Turing Machine with it.
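The heart of that classroom proof is a standard construction: represent the tape as two stacks, with the cells left of the head on one stack and the head cell plus everything to its right on the other, so moving the head is just a pop from one stack and a push onto the other. A sketch of that construction (class and method names are mine, for illustration):

```python
class TwoStackTape:
    """A Turing Machine tape simulated with two stacks, the classic
    construction behind the dual-stack PDA Turing Completeness proof."""

    def __init__(self, tape, blank="_"):
        self.blank = blank
        self.left = []                      # cells left of the head; top = nearest
        self.right = list(reversed(tape))   # head cell sits on top of this stack

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move(self, direction):              # +1 = head right, -1 = head left
        if direction == +1:
            self.left.append(self.right.pop() if self.right else self.blank)
        else:
            self.right.append(self.left.pop() if self.left else self.blank)
```

Plug this tape into any Turing Machine driver loop and you have a full simulation running on nothing but two stacks, which is exactly the kind of exercise I am suggesting for the proposed model.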
Have at it! Let us know how it goes.